Feb 19 05:25:02 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 05:25:02 crc restorecon[4773]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 05:25:02 crc restorecon[4773]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 05:25:02 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 05:25:02 crc restorecon[4773]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 
crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 05:25:03 crc restorecon[4773]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 
crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 05:25:03 crc restorecon[4773]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc 
restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 05:25:03 crc restorecon[4773]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 05:25:04 crc kubenswrapper[5012]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 05:25:04 crc kubenswrapper[5012]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 05:25:04 crc kubenswrapper[5012]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 05:25:04 crc kubenswrapper[5012]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 05:25:04 crc kubenswrapper[5012]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 05:25:04 crc kubenswrapper[5012]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.435878 5012 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441283 5012 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441340 5012 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441351 5012 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441360 5012 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441369 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441378 5012 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441387 5012 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441394 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441402 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 05:25:04 crc 
kubenswrapper[5012]: W0219 05:25:04.441411 5012 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441419 5012 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441442 5012 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441451 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441458 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441466 5012 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441475 5012 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441483 5012 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441491 5012 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441502 5012 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441512 5012 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441524 5012 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441535 5012 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441544 5012 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441553 5012 feature_gate.go:330] unrecognized feature gate: Example Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441563 5012 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441572 5012 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441580 5012 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441591 5012 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441601 5012 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441609 5012 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441620 5012 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441630 5012 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441638 5012 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441664 5012 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441671 5012 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441679 5012 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441687 5012 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441696 5012 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441703 5012 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441711 5012 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441719 5012 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441728 5012 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441735 5012 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441743 5012 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441751 5012 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441758 5012 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441767 5012 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441779 5012 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441789 5012 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441797 5012 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441805 5012 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441812 5012 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441820 5012 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441828 5012 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441837 5012 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441846 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441854 5012 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441862 5012 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441870 5012 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441878 5012 feature_gate.go:330] unrecognized feature 
gate: VSphereMultiVCenters Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441886 5012 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441894 5012 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441902 5012 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441910 5012 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441918 5012 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441926 5012 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441933 5012 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441945 5012 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441952 5012 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441960 5012 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.441968 5012 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443873 5012 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443897 5012 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443924 5012 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443936 5012 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 
05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443947 5012 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443957 5012 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443969 5012 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443981 5012 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.443991 5012 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444001 5012 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444011 5012 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444020 5012 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444030 5012 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444039 5012 flags.go:64] FLAG: --cgroup-root="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444048 5012 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444057 5012 flags.go:64] FLAG: --client-ca-file="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444067 5012 flags.go:64] FLAG: --cloud-config="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444076 5012 flags.go:64] FLAG: --cloud-provider="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444085 5012 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444098 5012 flags.go:64] FLAG: --cluster-domain="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444107 5012 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444116 5012 flags.go:64] FLAG: --config-dir="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444125 5012 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444137 5012 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444150 5012 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444159 5012 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444168 5012 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444178 5012 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444187 5012 flags.go:64] FLAG: --contention-profiling="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444196 5012 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444205 5012 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444215 5012 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444229 5012 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444241 5012 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444250 5012 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444259 5012 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444268 5012 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 
05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444277 5012 flags.go:64] FLAG: --enable-server="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444286 5012 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444329 5012 flags.go:64] FLAG: --event-burst="100" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444339 5012 flags.go:64] FLAG: --event-qps="50" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444348 5012 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444358 5012 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444367 5012 flags.go:64] FLAG: --eviction-hard="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444378 5012 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444387 5012 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444397 5012 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444406 5012 flags.go:64] FLAG: --eviction-soft="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444415 5012 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444424 5012 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444433 5012 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444442 5012 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444451 5012 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444460 5012 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 
05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444470 5012 flags.go:64] FLAG: --feature-gates="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444483 5012 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444493 5012 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444502 5012 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444513 5012 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444523 5012 flags.go:64] FLAG: --healthz-port="10248" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444532 5012 flags.go:64] FLAG: --help="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444541 5012 flags.go:64] FLAG: --hostname-override="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444550 5012 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444559 5012 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444569 5012 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444578 5012 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444587 5012 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444596 5012 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444612 5012 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444621 5012 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444630 5012 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 05:25:04 crc 
kubenswrapper[5012]: I0219 05:25:04.444639 5012 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444649 5012 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444658 5012 flags.go:64] FLAG: --kube-reserved="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444667 5012 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444676 5012 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444685 5012 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444694 5012 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444703 5012 flags.go:64] FLAG: --lock-file="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444712 5012 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444721 5012 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444730 5012 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444745 5012 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444754 5012 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444763 5012 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444772 5012 flags.go:64] FLAG: --logging-format="text" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444782 5012 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444793 5012 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444802 5012 flags.go:64] FLAG: --manifest-url="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444811 5012 flags.go:64] FLAG: --manifest-url-header="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444823 5012 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444834 5012 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444845 5012 flags.go:64] FLAG: --max-pods="110" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444855 5012 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444865 5012 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444874 5012 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444884 5012 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444893 5012 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444902 5012 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444912 5012 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444933 5012 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444943 5012 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444952 5012 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444962 5012 flags.go:64] FLAG: --pod-cidr="" Feb 19 05:25:04 crc 
kubenswrapper[5012]: I0219 05:25:04.444972 5012 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444986 5012 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.444995 5012 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445004 5012 flags.go:64] FLAG: --pods-per-core="0" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445013 5012 flags.go:64] FLAG: --port="10250" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445023 5012 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445032 5012 flags.go:64] FLAG: --provider-id="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445041 5012 flags.go:64] FLAG: --qos-reserved="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445050 5012 flags.go:64] FLAG: --read-only-port="10255" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445059 5012 flags.go:64] FLAG: --register-node="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445068 5012 flags.go:64] FLAG: --register-schedulable="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445077 5012 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445093 5012 flags.go:64] FLAG: --registry-burst="10" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445102 5012 flags.go:64] FLAG: --registry-qps="5" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445111 5012 flags.go:64] FLAG: --reserved-cpus="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445119 5012 flags.go:64] FLAG: --reserved-memory="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445131 5012 flags.go:64] FLAG: 
--resolv-conf="/etc/resolv.conf" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445140 5012 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445149 5012 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445159 5012 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445168 5012 flags.go:64] FLAG: --runonce="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445177 5012 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445186 5012 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445195 5012 flags.go:64] FLAG: --seccomp-default="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445204 5012 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445213 5012 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445222 5012 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445231 5012 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445241 5012 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445250 5012 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445258 5012 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445267 5012 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445276 5012 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: 
I0219 05:25:04.445288 5012 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445297 5012 flags.go:64] FLAG: --system-cgroups="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445330 5012 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445344 5012 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445353 5012 flags.go:64] FLAG: --tls-cert-file="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445363 5012 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445375 5012 flags.go:64] FLAG: --tls-min-version="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445384 5012 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445393 5012 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445402 5012 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445411 5012 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445421 5012 flags.go:64] FLAG: --v="2" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445432 5012 flags.go:64] FLAG: --version="false" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445444 5012 flags.go:64] FLAG: --vmodule="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445456 5012 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.445465 5012 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445721 5012 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445736 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445748 5012 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445757 5012 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445766 5012 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445774 5012 feature_gate.go:330] unrecognized feature gate: Example Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445782 5012 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445790 5012 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445798 5012 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445805 5012 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445813 5012 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445821 5012 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445829 5012 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445837 5012 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445847 5012 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445857 5012 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445874 5012 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445883 5012 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445892 5012 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445900 5012 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445909 5012 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445918 5012 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445926 5012 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445935 5012 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445943 5012 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445954 5012 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445964 5012 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445973 5012 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445982 5012 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445990 5012 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.445998 5012 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446006 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446014 5012 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446022 5012 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446030 5012 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446038 5012 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446046 5012 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446054 5012 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446062 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446070 5012 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446078 5012 
feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446086 5012 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446095 5012 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446103 5012 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446111 5012 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446119 5012 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446127 5012 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446138 5012 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446150 5012 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446158 5012 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446167 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446175 5012 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446183 5012 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446190 5012 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446198 5012 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446206 5012 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446214 5012 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446222 5012 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446230 5012 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446240 5012 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446248 5012 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446256 5012 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446263 
5012 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446272 5012 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446280 5012 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446287 5012 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446295 5012 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446326 5012 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446335 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446342 5012 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.446350 5012 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.447195 5012 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.461421 5012 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.461484 5012 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 
05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461624 5012 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461648 5012 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461659 5012 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461669 5012 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461679 5012 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461715 5012 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461725 5012 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461734 5012 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461743 5012 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461756 5012 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461770 5012 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461781 5012 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461790 5012 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461799 5012 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461809 5012 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461817 5012 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461826 5012 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461834 5012 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461846 5012 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461857 5012 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461866 5012 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461874 5012 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461882 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461891 5012 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461899 5012 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461908 5012 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461918 5012 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461927 5012 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461936 5012 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461945 5012 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461954 5012 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461962 5012 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461970 5012 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461978 5012 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallGCP Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461989 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.461998 5012 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462006 5012 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462015 5012 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462023 5012 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462031 5012 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462039 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462048 5012 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462056 5012 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462065 5012 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462073 5012 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462081 5012 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462089 5012 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462098 5012 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462107 5012 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462115 5012 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462124 5012 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462132 5012 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462140 5012 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462149 5012 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462160 5012 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462171 5012 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462182 5012 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462190 5012 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462199 5012 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462207 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462215 5012 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462224 5012 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462232 5012 feature_gate.go:330] unrecognized feature gate: Example Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462241 5012 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462250 5012 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462258 5012 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462267 5012 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462275 5012 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462283 5012 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462341 5012 feature_gate.go:353] 
Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462353 5012 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.462370 5012 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462610 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462624 5012 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462634 5012 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462645 5012 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462659 5012 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462668 5012 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462677 5012 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462686 5012 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462694 5012 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462703 5012 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462711 5012 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462720 5012 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462728 5012 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462737 5012 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462746 5012 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462757 5012 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462765 5012 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462774 5012 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462782 5012 feature_gate.go:330] 
unrecognized feature gate: AutomatedEtcdBackup Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462791 5012 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462799 5012 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462810 5012 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462820 5012 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462828 5012 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462838 5012 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462847 5012 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462855 5012 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462864 5012 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462872 5012 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462880 5012 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462889 5012 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462897 5012 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462906 5012 feature_gate.go:330] 
unrecognized feature gate: NetworkLiveMigration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462914 5012 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462923 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462936 5012 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462946 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462956 5012 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462967 5012 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462977 5012 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462986 5012 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.462995 5012 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463003 5012 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463012 5012 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463020 5012 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463028 5012 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463037 5012 feature_gate.go:330] unrecognized 
feature gate: ChunkSizeMiB Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463046 5012 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463054 5012 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463063 5012 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463071 5012 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463079 5012 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463088 5012 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463096 5012 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463104 5012 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463113 5012 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463121 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463129 5012 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463138 5012 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463146 5012 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463154 5012 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 05:25:04 crc 
kubenswrapper[5012]: W0219 05:25:04.463163 5012 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463171 5012 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463180 5012 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463188 5012 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463196 5012 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463204 5012 feature_gate.go:330] unrecognized feature gate: Example Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463213 5012 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463222 5012 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463230 5012 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.463240 5012 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.463253 5012 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.464345 5012 
server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.477108 5012 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.477375 5012 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.479586 5012 server.go:997] "Starting client certificate rotation" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.479638 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.479852 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-23 07:22:13.765294666 +0000 UTC Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.479971 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.506644 5012 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.509593 5012 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.510758 5012 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 
05:25:04.525927 5012 log.go:25] "Validated CRI v1 runtime API" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.573213 5012 log.go:25] "Validated CRI v1 image API" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.576056 5012 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.583135 5012 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-05-20-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.583210 5012 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.601142 5012 manager.go:217] Machine: {Timestamp:2026-02-19 05:25:04.598090029 +0000 UTC m=+0.631412618 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:61bedd06-2cec-4dca-b6dd-2763eca77472 BootID:30666371-bab3-4856-be6a-83da7a1b9e4e Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a0:c8:3d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a0:c8:3d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3c:7c:6c Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c1:45:ca Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:34:d9:f7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:82:4e:0d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ca:e6:9d:5c:f0:ac Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8a:ef:e4:eb:08:4d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.601878 5012 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.602914 5012 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.605813 5012 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.606196 5012 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.606276 5012 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.606642 5012 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.606663 5012 container_manager_linux.go:303] "Creating device plugin manager"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.607209 5012 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.607262 5012 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.607740 5012 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.608403 5012 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.613567 5012 kubelet.go:418] "Attempting to sync node with API server"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.613602 5012 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.613629 5012 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.613663 5012 kubelet.go:324] "Adding apiserver pod source"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.613682 5012 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.618104 5012 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.619392 5012 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.621525 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.621533 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.621631 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.621664 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.622466 5012 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624063 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624086 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624094 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624102 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624114 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624122 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624130 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624141 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624150 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624159 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624170 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624176 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624208 5012 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.624753 5012 server.go:1280] "Started kubelet"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.625129 5012 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.625162 5012 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.626396 5012 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 19 05:25:04 crc systemd[1]: Started Kubernetes Kubelet.
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.627200 5012 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.629559 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.629624 5012 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.629664 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:05:56.128781604 +0000 UTC
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.630009 5012 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.630175 5012 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.630381 5012 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.630051 5012 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.630719 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused
Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.630793 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError"
Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.630914 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.631917 5012 factory.go:55] Registering systemd factory
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.631972 5012 factory.go:221] Registration of the systemd container factory successfully
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.632441 5012 factory.go:153] Registering CRI-O factory
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.632470 5012 factory.go:221] Registration of the crio container factory successfully
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.632537 5012 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.632564 5012 factory.go:103] Registering Raw factory
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.632471 5012 server.go:460] "Adding debug handlers to kubelet server"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.632584 5012 manager.go:1196] Started watching for new ooms in manager
Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.640976 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18958e7f0453496f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 05:25:04.624716143 +0000 UTC m=+0.658038712,LastTimestamp:2026-02-19 05:25:04.624716143 +0000 UTC m=+0.658038712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.642715 5012 manager.go:319] Starting recovery of all containers
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.650774 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.650873 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.650899 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.650923 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.650947 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.650967 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.650991 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651012 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651037 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651057 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651077 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651100 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651121 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651145 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651168 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651191 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651213 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651233 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651253 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651273 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651294 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651340 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651364 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651387 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651410 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651430 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651454 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651476 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651497 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651558 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651582 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651604 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651753 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651774 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651795 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651817 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651838 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651858 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651879 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.651901 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.654657 5012 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.654932 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.655086 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.655225 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.655444 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.655591 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.655710 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.655828 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.655947 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.656131 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.656256 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.656406 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.656525 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.656651 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.656789 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.656911 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.657042 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.657162 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.657291 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.657465 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.657591 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.657728 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.657845 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.657962 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.658076 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.658248 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.658406 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.658527 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.658645 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.658762 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.658900 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.659019 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.659137 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.659264 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.659436 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.659570 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.659690 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.659820 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292"
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.659940 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.660061 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.660188 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.660353 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.660520 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.660690 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.660864 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.661029 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.661173 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.661363 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.661535 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.661696 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.661867 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.662070 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.662293 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.662524 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.662693 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.662874 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.663059 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.663195 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.663403 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.663540 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.663659 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.663777 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.664117 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.664333 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.664511 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.664693 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.664861 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.665044 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.665223 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.665523 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.665701 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.665863 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.666049 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.666242 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.666522 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.666676 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.666795 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.666920 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.667106 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.667235 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" 
seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.667386 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.667509 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.667661 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.667796 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.667921 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.668042 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 
05:25:04.668163 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.668277 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.668470 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.668606 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.668758 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669518 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669571 5012 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669648 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669672 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669698 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669718 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669739 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669762 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669780 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669801 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669829 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669852 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669874 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669893 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669914 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669934 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669955 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669976 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.669999 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670023 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 
05:25:04.670045 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670068 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670089 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670112 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670134 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670157 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670197 5012 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670220 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670240 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670263 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670283 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670329 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670351 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670375 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670396 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670417 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670437 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670457 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670479 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670499 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670520 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670540 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670562 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670585 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670605 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" 
seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670626 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670646 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670666 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670688 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670707 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670730 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670750 5012 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670773 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670797 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670826 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670855 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670875 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670899 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670922 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670944 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670965 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.670985 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671005 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671026 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671045 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671065 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671088 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671110 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671131 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671150 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" 
Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671171 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671192 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671212 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671234 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671260 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671286 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 
05:25:04.671331 5012 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671352 5012 reconstruct.go:97] "Volume reconstruction finished" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.671368 5012 reconciler.go:26] "Reconciler: start to sync state" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.675972 5012 manager.go:324] Recovery completed Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.691002 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.693170 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.693218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.693232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.696448 5012 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.699257 5012 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.699289 5012 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.699349 5012 state_mem.go:36] "Initialized new in-memory state store" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.699854 5012 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.701546 5012 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.701599 5012 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.701665 5012 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 05:25:04 crc kubenswrapper[5012]: W0219 05:25:04.702968 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.703061 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.723183 5012 policy_none.go:49] "None policy: Start" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.724542 5012 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.724583 5012 state_mem.go:35] "Initializing new in-memory state store" Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.730677 5012 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.792210 5012 manager.go:334] "Starting Device Plugin manager" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.792519 5012 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.792556 5012 server.go:79] "Starting device plugin registration server" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.793253 5012 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.793283 5012 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.793950 5012 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.794112 5012 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.794124 5012 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.801887 5012 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.804470 5012 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.805881 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.809658 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.809718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.809739 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.809953 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.810284 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.810382 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.811503 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.811597 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.811622 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.811743 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.811806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.811827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.811938 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.812240 5012 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.812362 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.813492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.813558 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.813577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.813739 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.813779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.813794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.814093 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.814168 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.814338 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.815657 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.815715 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.815742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.816014 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.816056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.816077 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.816288 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.816451 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.816509 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.817499 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.817548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.817567 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.817690 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.817724 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.817741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.817975 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.818029 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.818987 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.819029 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.819046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.832193 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874235 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874339 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874418 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874452 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874582 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874733 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874771 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874839 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874903 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.874937 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.875002 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.875036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.875104 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.875163 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.895885 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.897705 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.897753 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.897768 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.897803 5012 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 05:25:04 crc kubenswrapper[5012]: E0219 05:25:04.898551 5012 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 
05:25:04.977286 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977367 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977391 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977417 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977439 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977462 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977489 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977510 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977536 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977556 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977577 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 
05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977597 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977617 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977637 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977634 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977659 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.977779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978206 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978066 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978090 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978138 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978184 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978290 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978118 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978341 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978409 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978426 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978432 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978448 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 05:25:04 crc kubenswrapper[5012]: I0219 05:25:04.978463 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.098998 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.100968 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.101049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.101071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.101120 5012 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 05:25:05 crc kubenswrapper[5012]: E0219 05:25:05.101980 5012 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" 
node="crc" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.156379 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.179203 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.194211 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:05 crc kubenswrapper[5012]: W0219 05:25:05.208272 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-aa6cb4994fce0e7fe7b6aa95c9d202e67465255f154aab24c4b4ccd3779b26d7 WatchSource:0}: Error finding container aa6cb4994fce0e7fe7b6aa95c9d202e67465255f154aab24c4b4ccd3779b26d7: Status 404 returned error can't find the container with id aa6cb4994fce0e7fe7b6aa95c9d202e67465255f154aab24c4b4ccd3779b26d7 Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.211577 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.219596 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 05:25:05 crc kubenswrapper[5012]: W0219 05:25:05.222488 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-df098398500e146c4fad86acf450b5a3a65836432242dc8ed0f63979f99d83c9 WatchSource:0}: Error finding container df098398500e146c4fad86acf450b5a3a65836432242dc8ed0f63979f99d83c9: Status 404 returned error can't find the container with id df098398500e146c4fad86acf450b5a3a65836432242dc8ed0f63979f99d83c9 Feb 19 05:25:05 crc kubenswrapper[5012]: E0219 05:25:05.232978 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms" Feb 19 05:25:05 crc kubenswrapper[5012]: W0219 05:25:05.237431 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2f4e611fcffc46d79cef0666991d7af0a2cdef6b4c6a1619b675fc17105d57f3 WatchSource:0}: Error finding container 2f4e611fcffc46d79cef0666991d7af0a2cdef6b4c6a1619b675fc17105d57f3: Status 404 returned error can't find the container with id 2f4e611fcffc46d79cef0666991d7af0a2cdef6b4c6a1619b675fc17105d57f3 Feb 19 05:25:05 crc kubenswrapper[5012]: W0219 05:25:05.239461 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-34a9b67e25282ac58cdc8c5071816acbee7a2c427079f772302a933be829f5ef WatchSource:0}: Error finding container 34a9b67e25282ac58cdc8c5071816acbee7a2c427079f772302a933be829f5ef: Status 404 returned error can't find the container with id 34a9b67e25282ac58cdc8c5071816acbee7a2c427079f772302a933be829f5ef Feb 19 
05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.502130 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.504346 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.504391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.504403 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.504427 5012 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 05:25:05 crc kubenswrapper[5012]: E0219 05:25:05.504951 5012 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Feb 19 05:25:05 crc kubenswrapper[5012]: W0219 05:25:05.544156 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:05 crc kubenswrapper[5012]: E0219 05:25:05.544290 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:05 crc kubenswrapper[5012]: W0219 05:25:05.554853 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:05 crc kubenswrapper[5012]: E0219 05:25:05.554947 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:05 crc kubenswrapper[5012]: E0219 05:25:05.588298 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18958e7f0453496f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 05:25:04.624716143 +0000 UTC m=+0.658038712,LastTimestamp:2026-02-19 05:25:04.624716143 +0000 UTC m=+0.658038712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.628446 5012 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.630434 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:21:30.989001118 +0000 UTC Feb 19 05:25:05 crc 
kubenswrapper[5012]: I0219 05:25:05.708773 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0dbdd3bebff607e5e2b17dc9be457bf2d28c9216b919b8e4e25c675fa3e647b5"} Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.710515 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"34a9b67e25282ac58cdc8c5071816acbee7a2c427079f772302a933be829f5ef"} Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.711790 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2f4e611fcffc46d79cef0666991d7af0a2cdef6b4c6a1619b675fc17105d57f3"} Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.713343 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df098398500e146c4fad86acf450b5a3a65836432242dc8ed0f63979f99d83c9"} Feb 19 05:25:05 crc kubenswrapper[5012]: I0219 05:25:05.714421 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa6cb4994fce0e7fe7b6aa95c9d202e67465255f154aab24c4b4ccd3779b26d7"} Feb 19 05:25:05 crc kubenswrapper[5012]: W0219 05:25:05.730223 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:05 crc kubenswrapper[5012]: E0219 05:25:05.730489 5012 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:06 crc kubenswrapper[5012]: E0219 05:25:06.034163 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s" Feb 19 05:25:06 crc kubenswrapper[5012]: W0219 05:25:06.130882 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:06 crc kubenswrapper[5012]: E0219 05:25:06.131027 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.305341 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.307088 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.307154 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.307175 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.307223 5012 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 05:25:06 crc kubenswrapper[5012]: E0219 05:25:06.307844 5012 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.628446 5012 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.631617 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:27:58.654903075 +0000 UTC Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.709759 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 05:25:06 crc kubenswrapper[5012]: E0219 05:25:06.711028 5012 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.719811 5012 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="947abd645c60c2b8d4e76b7607656fca7188b4d74c2f21b8c75b21f2ec3be6ba" exitCode=0 Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.719903 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"947abd645c60c2b8d4e76b7607656fca7188b4d74c2f21b8c75b21f2ec3be6ba"} Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.719981 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.721449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.721515 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.721539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.721721 5012 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4" exitCode=0 Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.721789 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4"} Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.721912 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.723141 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.723177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.723192 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.725853 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d"} Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.725926 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07"} Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.725948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583"} Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.729711 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c" exitCode=0 Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.729764 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c"} Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.729847 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.731004 5012 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.731039 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.731054 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.732772 5012 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5afa3a994c682051deccf2957ed92185e0476eaac836e8e8e2df4490d4390e01" exitCode=0 Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.732819 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5afa3a994c682051deccf2957ed92185e0476eaac836e8e8e2df4490d4390e01"} Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.732894 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.732927 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.739601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.739608 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.739637 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.739650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.739657 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:06 crc kubenswrapper[5012]: I0219 05:25:06.739679 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:07 crc kubenswrapper[5012]: W0219 05:25:07.382935 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:07 crc kubenswrapper[5012]: E0219 05:25:07.383116 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.629082 5012 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.632338 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 19:05:06.179703869 +0000 UTC Feb 19 05:25:07 crc kubenswrapper[5012]: E0219 05:25:07.636176 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="3.2s" Feb 19 05:25:07 crc 
kubenswrapper[5012]: I0219 05:25:07.742397 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.742521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.742618 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.742639 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.749094 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"71bdebd00e394dba09a5bec37deb99b932b89435297b440be4309a1f32a1537d"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.749238 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.750567 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.750613 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.750633 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.753478 5012 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d68244c4727f029062f490145b4653a0c06d1193d64ad899f9682bfc7c72021f" exitCode=0 Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.753557 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d68244c4727f029062f490145b4653a0c06d1193d64ad899f9682bfc7c72021f"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.753712 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.755111 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.755150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.755168 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.759904 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.759964 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.759994 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.760126 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.761849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.761900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.761918 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.768929 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b"} Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.769058 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.770140 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.770213 5012 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.770234 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:07 crc kubenswrapper[5012]: W0219 05:25:07.777021 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:07 crc kubenswrapper[5012]: E0219 05:25:07.777201 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:07 crc kubenswrapper[5012]: W0219 05:25:07.808791 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.110:6443: connect: connection refused Feb 19 05:25:07 crc kubenswrapper[5012]: E0219 05:25:07.808919 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.110:6443: connect: connection refused" logger="UnhandledError" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.909427 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.911023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.911077 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.911093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:07 crc kubenswrapper[5012]: I0219 05:25:07.911137 5012 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 05:25:07 crc kubenswrapper[5012]: E0219 05:25:07.912254 5012 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.110:6443: connect: connection refused" node="crc" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.632720 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:17:56.36692917 +0000 UTC Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.779364 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de"} Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.779644 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.780921 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.780970 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.780985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.783331 5012 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c90faa45a48a911f6a54c2b54f174d950d8bf7b6240f361a163dba728117831d" exitCode=0 Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.783384 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c90faa45a48a911f6a54c2b54f174d950d8bf7b6240f361a163dba728117831d"} Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.783479 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.783498 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.783533 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.783560 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.783510 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.785405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.785433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.785448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786070 5012 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786105 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786197 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786203 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786216 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:08 crc kubenswrapper[5012]: I0219 05:25:08.786261 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.416151 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.633730 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:37:14.48190694 +0000 UTC Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.794189 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8709e4680cc10d1accdf040af0dbec8dfdd073c6c7ba8f10f5ceda323debc20f"} Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.794279 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"70ce20c27b738f9e4f2cb4a95c8d9366876c041a34c3da156420b7c3cf7c4107"} Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.794347 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"276f77869ae3299df00eaa9a2d0baca767f3da7bf3c5ea6e0aa35e10e41c1225"} Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.794284 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.794404 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.794446 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.796614 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.796675 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.796699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.796845 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 
05:25:09.796896 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:09 crc kubenswrapper[5012]: I0219 05:25:09.796919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.611422 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.621763 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.633976 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:19:09.906178443 +0000 UTC Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.804170 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4bbd9b867b5d076f57ac69807779191f0a64c7adf74157a3c7a9b7835342f28e"} Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.804241 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1517e81dcb6ffb82302484ac8ba6fb7541ac60f35d4402506eace57e81424719"} Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.804275 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.804361 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.804368 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.809639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.809960 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.810050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.811632 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.811700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.811722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:10 crc kubenswrapper[5012]: I0219 05:25:10.942175 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.108639 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.108952 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.110862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.110946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:11 crc 
kubenswrapper[5012]: I0219 05:25:11.110988 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.113032 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.114762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.114805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.114821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.114848 5012 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.347339 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.634089 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 16:57:33.97592753 +0000 UTC Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.806411 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.806456 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.808184 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.808286 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.808357 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.808371 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.808389 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:11 crc kubenswrapper[5012]: I0219 05:25:11.808393 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.016264 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.635365 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:51:03.219929429 +0000 UTC Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.635470 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.635779 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.637698 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.637757 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.637777 5012 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.809840 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.809853 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.811847 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.811926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.811867 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.812021 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.812049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.811946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.886768 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.887161 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.888747 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:12 crc 
kubenswrapper[5012]: I0219 05:25:12.888809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:12 crc kubenswrapper[5012]: I0219 05:25:12.888828 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:13 crc kubenswrapper[5012]: I0219 05:25:13.636233 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:09:16.646021404 +0000 UTC Feb 19 05:25:14 crc kubenswrapper[5012]: I0219 05:25:14.348441 5012 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 05:25:14 crc kubenswrapper[5012]: I0219 05:25:14.348565 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 05:25:14 crc kubenswrapper[5012]: I0219 05:25:14.636979 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:22:17.885573798 +0000 UTC Feb 19 05:25:14 crc kubenswrapper[5012]: I0219 05:25:14.789476 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:14 crc kubenswrapper[5012]: I0219 05:25:14.789814 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:14 crc 
kubenswrapper[5012]: I0219 05:25:14.792403 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:14 crc kubenswrapper[5012]: I0219 05:25:14.792466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:14 crc kubenswrapper[5012]: I0219 05:25:14.792487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:14 crc kubenswrapper[5012]: E0219 05:25:14.804641 5012 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 05:25:15 crc kubenswrapper[5012]: I0219 05:25:15.361150 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 05:25:15 crc kubenswrapper[5012]: I0219 05:25:15.361508 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:15 crc kubenswrapper[5012]: I0219 05:25:15.364026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:15 crc kubenswrapper[5012]: I0219 05:25:15.364076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:15 crc kubenswrapper[5012]: I0219 05:25:15.364096 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:15 crc kubenswrapper[5012]: I0219 05:25:15.637983 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:06:20.770555153 +0000 UTC Feb 19 05:25:16 crc kubenswrapper[5012]: I0219 05:25:16.638118 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 
22:36:52.118308237 +0000 UTC Feb 19 05:25:17 crc kubenswrapper[5012]: I0219 05:25:17.272413 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:17 crc kubenswrapper[5012]: I0219 05:25:17.272641 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:17 crc kubenswrapper[5012]: I0219 05:25:17.274451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:17 crc kubenswrapper[5012]: I0219 05:25:17.274521 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:17 crc kubenswrapper[5012]: I0219 05:25:17.274537 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:17 crc kubenswrapper[5012]: I0219 05:25:17.639079 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:03:21.037889582 +0000 UTC Feb 19 05:25:18 crc kubenswrapper[5012]: I0219 05:25:18.628680 5012 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 05:25:18 crc kubenswrapper[5012]: I0219 05:25:18.640001 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:28:25.354237329 +0000 UTC Feb 19 05:25:18 crc kubenswrapper[5012]: W0219 05:25:18.781073 5012 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 
19 05:25:18 crc kubenswrapper[5012]: I0219 05:25:18.781199 5012 trace.go:236] Trace[541594742]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 05:25:08.779) (total time: 10001ms): Feb 19 05:25:18 crc kubenswrapper[5012]: Trace[541594742]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:25:18.781) Feb 19 05:25:18 crc kubenswrapper[5012]: Trace[541594742]: [10.001379483s] [10.001379483s] END Feb 19 05:25:18 crc kubenswrapper[5012]: E0219 05:25:18.781232 5012 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 05:25:19 crc kubenswrapper[5012]: I0219 05:25:19.025663 5012 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 19 05:25:19 crc kubenswrapper[5012]: I0219 05:25:19.025753 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 05:25:19 crc kubenswrapper[5012]: I0219 05:25:19.032941 5012 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]log ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]etcd ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/priority-and-fairness-filter ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/start-apiextensions-informers ok Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/start-system-namespaces-controller ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 19 05:25:19 crc kubenswrapper[5012]: 
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/start-kube-aggregator-informers ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 19 05:25:19 crc kubenswrapper[5012]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]autoregister-completion ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/apiservice-openapi-controller ok Feb 19 05:25:19 crc kubenswrapper[5012]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 19 05:25:19 crc kubenswrapper[5012]: livez check failed Feb 19 05:25:19 crc kubenswrapper[5012]: I0219 05:25:19.033052 5012 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:25:19 crc kubenswrapper[5012]: I0219 05:25:19.640608 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:57:46.074394759 +0000 UTC Feb 19 05:25:20 crc kubenswrapper[5012]: I0219 05:25:20.641650 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:32:53.495547936 +0000 UTC Feb 19 05:25:21 crc kubenswrapper[5012]: I0219 05:25:21.642353 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:11:31.521665551 +0000 UTC Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.055030 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.055286 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.056767 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.056812 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.056828 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.080002 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 
05:25:22.643245 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:42:14.736295094 +0000 UTC Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.842976 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.844557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.844632 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.844652 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.895681 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.895953 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.897737 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.897843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.897864 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:22 crc kubenswrapper[5012]: I0219 05:25:22.902828 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:23 crc kubenswrapper[5012]: I0219 05:25:23.644123 
5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:03:33.247128866 +0000 UTC Feb 19 05:25:23 crc kubenswrapper[5012]: I0219 05:25:23.845921 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:25:23 crc kubenswrapper[5012]: I0219 05:25:23.845987 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:23 crc kubenswrapper[5012]: I0219 05:25:23.847417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:23 crc kubenswrapper[5012]: I0219 05:25:23.847485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:23 crc kubenswrapper[5012]: I0219 05:25:23.847505 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.019152 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.020957 5012 trace.go:236] Trace[931944]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 05:25:10.673) (total time: 13347ms): Feb 19 05:25:24 crc kubenswrapper[5012]: Trace[931944]: ---"Objects listed" error: 13347ms (05:25:24.020) Feb 19 05:25:24 crc kubenswrapper[5012]: Trace[931944]: [13.347535624s] [13.347535624s] END Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.020986 5012 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.021427 5012 trace.go:236] Trace[1542930860]: 
"Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 05:25:14.009) (total time: 10011ms): Feb 19 05:25:24 crc kubenswrapper[5012]: Trace[1542930860]: ---"Objects listed" error: 10011ms (05:25:24.021) Feb 19 05:25:24 crc kubenswrapper[5012]: Trace[1542930860]: [10.011647198s] [10.011647198s] END Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.021455 5012 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.021627 5012 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.022170 5012 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.022371 5012 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.034591 5012 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.055462 5012 csr.go:261] certificate signing request csr-rc45m is approved, waiting to be issued Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.059695 5012 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34834->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.059772 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34834->192.168.126.11:17697: read: connection reset by peer" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.060209 5012 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.060280 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.062274 5012 csr.go:257] certificate signing request csr-rc45m is issued Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.256375 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.264157 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.479338 5012 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 05:25:24 crc kubenswrapper[5012]: W0219 05:25:24.480000 5012 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 
05:25:24 crc kubenswrapper[5012]: W0219 05:25:24.480007 5012 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 05:25:24 crc kubenswrapper[5012]: W0219 05:25:24.480027 5012 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.480018 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.110:52434->38.102.83.110:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18958e7f29d2c0d9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 05:25:05.253826777 +0000 UTC m=+1.287149386,LastTimestamp:2026-02-19 05:25:05.253826777 +0000 UTC m=+1.287149386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.625050 5012 apiserver.go:52] "Watching apiserver" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.634284 5012 
reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.634865 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.635599 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.635619 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.635640 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.635741 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.635889 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.636173 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.636278 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.636288 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.636511 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.638369 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639002 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639226 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639462 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639734 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639902 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.640074 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.640194 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.641034 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.644715 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-12-09 20:25:00.182369389 +0000 UTC Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.677202 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.698099 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.724659 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.732026 5012 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.740280 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.756905 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.768176 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.786968 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.800257 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.811816 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827031 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827100 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827140 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827185 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827225 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827259 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827297 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827360 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827396 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827432 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827540 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827571 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827654 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " 
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827687 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827721 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827911 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827959 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827999 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828032 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828063 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828095 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828135 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828193 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828228 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.828260 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828293 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828353 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828385 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828423 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828458 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828492 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828522 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828565 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828554 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828703 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828715 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828736 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828746 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828820 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828869 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828939 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828978 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829016 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829053 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829125 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829162 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829197 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829236 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829248 5012 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829270 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829362 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829403 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829452 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829434 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829508 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829585 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829619 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829648 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829673 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829697 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829724 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829751 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829775 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829798 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829829 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 05:25:24 crc kubenswrapper[5012]: 
I0219 05:25:24.829856 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829876 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.829895 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.329865966 +0000 UTC m=+21.363188545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829915 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829924 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829963 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829998 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830026 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 05:25:24 
crc kubenswrapper[5012]: I0219 05:25:24.830052 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830035 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830077 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830105 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830132 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830159 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830185 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830211 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830239 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830251 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830263 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830262 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830337 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830363 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830363 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: 
"20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830389 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830447 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830474 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830498 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830553 5012 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830572 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830559 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830669 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830715 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830757 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830794 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830837 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830875 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830926 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830965 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831005 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831046 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831084 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831121 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831160 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831196 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831235 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831273 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831340 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831382 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831418 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831456 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831491 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831530 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831578 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831616 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831655 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831697 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831734 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831772 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831808 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831845 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832142 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832637 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832638 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832632 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831886 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832808 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832826 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832868 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832903 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832941 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832976 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833012 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833052 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833124 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833159 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833194 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833237 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833272 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833335 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833375 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833413 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833448 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833486 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833523 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833564 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.833602 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833641 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833679 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833717 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833754 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833792 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833829 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833868 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833905 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833943 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833979 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " 
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834016 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834061 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834099 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834136 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834172 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834209 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834246 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834282 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834358 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834397 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834436 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.834471 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834515 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834553 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834592 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834668 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834708 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834767 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834802 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834842 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834878 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: 
I0219 05:25:24.834915 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834954 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834994 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835030 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835068 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835105 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835144 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835182 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835226 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835264 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835327 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835366 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835402 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835440 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835477 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835517 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835554 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" 
(UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835591 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835635 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835674 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835714 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835752 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835796 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835833 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835871 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835908 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835955 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836002 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836095 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836198 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836246 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836289 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836521 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" 
(UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836539 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.840219 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832804 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841956 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833002 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833124 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833272 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833339 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833789 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835034 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835520 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835581 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835715 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835957 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836231 5012 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836408 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836530 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836550 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836932 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837328 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837293 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837577 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837598 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837611 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837936 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837951 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.838341 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.838525 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.838508 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.838804 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839128 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839129 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839091 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839549 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839601 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839955 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839984 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841013 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841107 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841336 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841446 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841663 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.842899 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.843206 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.843526 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.843557 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.843673 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844018 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844251 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844370 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844495 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844573 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844641 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845144 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845687 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841040 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845758 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845823 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845911 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845938 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845959 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845981 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845974 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846011 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846023 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846064 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846115 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846153 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846182 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846259 5012 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846281 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846326 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846350 5012 
reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846370 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846402 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846415 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846414 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846781 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846962 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.847009 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.847056 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.347041904 +0000 UTC m=+21.380364473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.847256 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.847440 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.847599 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.848736 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.849507 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.849708 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.850222 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.850733 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.850776 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.850906 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851564 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851765 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851799 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851836 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851904 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.852041 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.852480 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.852652 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853088 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853139 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853107 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853616 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853657 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853887 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.854085 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.854747 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.854771 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.855181 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.855589 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.855997 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.856030 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.855932 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.856623 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.859578 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.859628 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.859805 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.859985 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860401 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860489 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860351 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860547 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860638 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860883 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860893 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.861278 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.862534 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865058 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.866993 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865180 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865241 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865320 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865691 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865770 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.866008 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.866723 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846421 5012 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867267 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867288 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867350 5012 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867403 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867426 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867445 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867461 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867481 5012 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867497 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867514 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867529 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867549 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867563 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867577 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867591 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867609 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867623 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867638 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867657 5012 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867670 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867683 5012 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867697 5012 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" 
DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867715 5012 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867730 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867743 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867757 5012 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867774 5012 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867786 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867799 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.868619 5012 swap_util.go:74] 
"error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.868891 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869448 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869591 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869828 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869996 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.870215 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.870035 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875013 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.870745 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871028 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871126 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871388 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871878 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871996 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.872445 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.872537 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.872530 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.872950 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873035 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873360 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873720 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873717 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873598 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874012 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875184 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874346 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874421 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874705 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874826 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874860 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874898 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875169 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875447 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875654 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.875489 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877169 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877445 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877464 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877655 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877688 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877703 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877756 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877820 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.377777269 +0000 UTC m=+21.411099838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877929 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877942 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877951 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.878012 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.378004665 +0000 UTC m=+21.411327234 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877953 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875783 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875920 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.876179 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878075 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.876388 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.876408 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878454 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878902 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878945 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.879398 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875786 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.879687 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.883540 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.383514512 +0000 UTC m=+21.416837081 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.891363 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.900106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.901358 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.901968 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.902673 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.907351 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.910583 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.916714 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.916922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.932788 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.936838 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de"} Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.936799 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de" exitCode=255 Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.937162 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.948709 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.960148 5012 scope.go:117] "RemoveContainer" containerID="093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.963450 5012 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.963579 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.968648 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.968928 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.968881 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.968994 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969034 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969043 5012 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969053 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969061 5012 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969070 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969079 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969087 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969096 5012 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969105 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969114 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on 
node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969124 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969133 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969141 5012 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969149 5012 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969177 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969186 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969195 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969204 5012 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969212 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969221 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969230 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969238 5012 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969246 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969255 5012 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969264 5012 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969273 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969283 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969292 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969317 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969326 5012 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969335 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969343 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" 
DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969350 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969358 5012 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969367 5012 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969374 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969383 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969457 5012 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969466 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969476 5012 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969503 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969524 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969532 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969540 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969549 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969557 5012 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969565 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" 
(UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969574 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969582 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969591 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969598 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969606 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969614 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969622 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969630 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969638 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969646 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969653 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969662 5012 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969669 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969677 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") 
on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969685 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969693 5012 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969700 5012 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969708 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969716 5012 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969736 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969743 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 
05:25:24.969751 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969759 5012 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969768 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969775 5012 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969783 5012 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969791 5012 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969798 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969806 5012 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969815 5012 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969836 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969844 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969851 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969858 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969866 5012 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969874 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969882 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969889 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969896 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969904 5012 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969912 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969920 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969927 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.969935 5012 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969944 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969951 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969958 5012 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969967 5012 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969975 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969984 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969991 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969999 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970007 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970015 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970025 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970034 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970041 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970049 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 
19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970056 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970064 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970072 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970080 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970088 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970098 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970106 5012 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.970114 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970122 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970130 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970138 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970146 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970154 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970162 5012 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970170 5012 reconciler_common.go:293] "Volume detached 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970178 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970186 5012 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970194 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970202 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970210 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970218 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970225 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970235 5012 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970244 5012 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970253 5012 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970261 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970223 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970269 5012 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971439 5012 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971750 5012 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971786 5012 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971796 5012 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971807 5012 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971818 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971835 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971844 5012 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" 
DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971852 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971860 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971869 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971880 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971890 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971901 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971910 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971918 5012 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971927 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971935 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971944 5012 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971952 5012 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971961 5012 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971970 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971980 5012 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.973792 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.974994 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.978183 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.984815 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.986504 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.998689 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: W0219 05:25:25.010331 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-be8d521325336e805f14cb4d9074867f74fa950ad1d1d87190fd2f52ed2feb0a WatchSource:0}: Error finding container be8d521325336e805f14cb4d9074867f74fa950ad1d1d87190fd2f52ed2feb0a: Status 404 returned error can't find the container with id be8d521325336e805f14cb4d9074867f74fa950ad1d1d87190fd2f52ed2feb0a Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.011414 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.024884 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.040995 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.051572 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.061957 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.063615 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 05:20:24 +0000 UTC, rotation deadline is 2026-12-24 06:40:52.68120113 +0000 UTC Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.063674 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7393h15m27.617530183s for next certificate rotation Feb 19 05:25:25 crc 
kubenswrapper[5012]: I0219 05:25:25.072852 5012 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.072881 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.074661 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.090947 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.177933 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.257582 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.375390 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.375571 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.375542721 +0000 UTC m=+22.408865290 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.376012 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.376149 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.376207 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.376199387 +0000 UTC m=+22.409521956 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.476734 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.476779 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.476829 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.476944 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.476946 5012 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477038 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.477017637 +0000 UTC m=+22.510340226 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.476961 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477073 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477066 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477110 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered 
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477128 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477113 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.477103229 +0000 UTC m=+22.510425798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477276 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.477218982 +0000 UTC m=+22.510541551 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.645891 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:06:42.386100089 +0000 UTC Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.776247 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4cs9h"] Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.776764 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.782234 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.782297 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.782493 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.800368 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5lt44"] Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.800911 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.802395 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.804915 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.804986 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.804915 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.805259 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.805201 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.820027 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.838668 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.870470 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882619 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6kt\" (UniqueName: \"kubernetes.io/projected/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-kube-api-access-5c6kt\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882666 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2sbf\" (UniqueName: \"kubernetes.io/projected/93b25601-4740-4c9d-9e62-0e7566484633-kube-api-access-r2sbf\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882703 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93b25601-4740-4c9d-9e62-0e7566484633-hosts-file\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882723 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-proxy-tls\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882812 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882832 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-rootfs\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc 
kubenswrapper[5012]: I0219 05:25:25.890385 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.910912 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.929349 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.945855 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"be8d521325336e805f14cb4d9074867f74fa950ad1d1d87190fd2f52ed2feb0a"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.947583 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.947607 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.947616 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"524e62f35d3a952d5a77bf2eb5da277810458a7ad787eebba44e43e3f108b4bd"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.949499 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.951329 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.952050 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.953596 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.953624 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"126647a0fe7aa63aac5402eeaf92a9fcb8fd378d8fb027865898010033040917"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.957437 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.972123 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983285 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93b25601-4740-4c9d-9e62-0e7566484633-hosts-file\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 
19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983353 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-proxy-tls\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983385 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983413 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-rootfs\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983443 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c6kt\" (UniqueName: \"kubernetes.io/projected/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-kube-api-access-5c6kt\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983386 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93b25601-4740-4c9d-9e62-0e7566484633-hosts-file\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" 
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983465 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2sbf\" (UniqueName: \"kubernetes.io/projected/93b25601-4740-4c9d-9e62-0e7566484633-kube-api-access-r2sbf\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983627 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-rootfs\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.984005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.987219 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.990733 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-proxy-tls\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.023069 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.029033 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2sbf\" (UniqueName: \"kubernetes.io/projected/93b25601-4740-4c9d-9e62-0e7566484633-kube-api-access-r2sbf\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.039295 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c6kt\" (UniqueName: \"kubernetes.io/projected/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-kube-api-access-5c6kt\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.047164 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.067807 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.091224 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.093244 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:26 crc kubenswrapper[5012]: W0219 05:25:26.105007 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b25601_4740_4c9d_9e62_0e7566484633.slice/crio-e84114dbdef9b1d4228b5db4ca491ad5449687eaf133c1e4370be544b7ff2c61 WatchSource:0}: Error finding container e84114dbdef9b1d4228b5db4ca491ad5449687eaf133c1e4370be544b7ff2c61: Status 404 returned error can't find the container with id e84114dbdef9b1d4228b5db4ca491ad5449687eaf133c1e4370be544b7ff2c61 Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.117719 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.121249 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: W0219 05:25:26.128532 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf72c12f8_ba8a_4e43_aba7_f3c31a59181a.slice/crio-97b1b7f1472e272dc53918364c6c295394ca63cce7cb4adab3c71102c1375740 WatchSource:0}: Error finding container 97b1b7f1472e272dc53918364c6c295394ca63cce7cb4adab3c71102c1375740: Status 404 returned error can't find the container with id 97b1b7f1472e272dc53918364c6c295394ca63cce7cb4adab3c71102c1375740 Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.149152 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.171119 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.191349 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.209624 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.388123 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.388389 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.388341044 +0000 UTC m=+24.421663803 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.388884 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.389048 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.389115 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.389106333 +0000 UTC m=+24.422428902 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.489825 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.489902 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.489935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490064 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490097 5012 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490163 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490175 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490183 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.490160259 +0000 UTC m=+24.523482828 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490192 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490263 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.490240501 +0000 UTC m=+24.523563070 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490106 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490284 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490323 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.490316683 +0000 UTC m=+24.523639252 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.646249 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:52:54.915871811 +0000 UTC Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.700910 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lkrsg"] Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.701239 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.701824 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.701898 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.702173 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.702206 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.702227 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.702427 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.704541 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.704865 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.705059 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.705178 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.706569 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.717265 5012 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.718113 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.719237 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.719881 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.720868 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.721402 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.721970 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.722863 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.722955 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.723947 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.724865 5012 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.725390 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.726488 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.727036 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.727548 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.728436 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.728919 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.729894 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.730373 5012 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.730901 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.731859 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.732330 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.733225 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.733650 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.734667 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.735103 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.735693 5012 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.736704 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.737153 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.738230 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.738716 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.739553 5012 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.739654 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.741372 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: 
I0219 05:25:26.742252 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.742709 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.743282 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.744131 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.744920 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.745793 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.746446 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.747429 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.747899 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.748836 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.749464 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.750416 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.750889 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.751744 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.752249 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.753352 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.753846 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.754726 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.755159 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.756047 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.756599 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.757030 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.757914 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-wv2tq"] 
Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.758621 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.760901 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.762101 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.763816 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793169 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-cni-binary-copy\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793210 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-daemon-config\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793234 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-system-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793267 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-multus\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-hostroot\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793322 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-cnibin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793341 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-k8s-cni-cncf-io\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793404 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-etc-kubernetes\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793452 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwlt7\" (UniqueName: \"kubernetes.io/projected/e7a04e36-fbaa-4de1-871a-7225433eebb0-kube-api-access-nwlt7\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793498 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-netns\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793518 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-kubelet\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793535 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-multus-certs\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793569 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-os-release\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793590 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-socket-dir-parent\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-conf-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793627 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793645 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-bin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.795581 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.817293 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.834069 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.852538 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.871963 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.894897 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-bin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 
05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.894959 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.894996 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-cni-binary-copy\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895038 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-daemon-config\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895069 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-system-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895114 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-multus\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895125 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-bin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895251 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-multus\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895158 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-cnibin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895280 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-cnibin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895411 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-k8s-cni-cncf-io\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895417 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-system-cni-dir\") pod 
\"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895484 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-etc-kubernetes\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895439 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-etc-kubernetes\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895513 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-k8s-cni-cncf-io\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895553 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-binary-copy\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895605 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-netns\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 
19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895643 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895677 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94dgd\" (UniqueName: \"kubernetes.io/projected/6f3af476-577a-46f9-a71c-60fab8fdaa68-kube-api-access-94dgd\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895715 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-netns\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895720 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-socket-dir-parent\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895799 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc 
kubenswrapper[5012]: I0219 05:25:26.895827 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-daemon-config\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895857 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-cnibin\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895895 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-hostroot\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895973 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-system-cni-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896009 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-os-release\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc 
kubenswrapper[5012]: I0219 05:25:26.896020 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-socket-dir-parent\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896045 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwlt7\" (UniqueName: \"kubernetes.io/projected/e7a04e36-fbaa-4de1-871a-7225433eebb0-kube-api-access-nwlt7\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896023 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-hostroot\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896109 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-kubelet\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896139 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896151 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-multus-certs\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896178 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-os-release\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896193 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-kubelet\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896211 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-conf-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896265 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-multus-certs\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896238 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-conf-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " 
pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896355 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-cni-binary-copy\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896545 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-os-release\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.900190 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.915596 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.932189 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.933129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwlt7\" (UniqueName: \"kubernetes.io/projected/e7a04e36-fbaa-4de1-871a-7225433eebb0-kube-api-access-nwlt7\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.955091 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.967164 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.967234 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.967256 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"97b1b7f1472e272dc53918364c6c295394ca63cce7cb4adab3c71102c1375740"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 
05:25:26.970277 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4cs9h" event={"ID":"93b25601-4740-4c9d-9e62-0e7566484633","Type":"ContainerStarted","Data":"2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.970380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4cs9h" event={"ID":"93b25601-4740-4c9d-9e62-0e7566484633","Type":"ContainerStarted","Data":"e84114dbdef9b1d4228b5db4ca491ad5449687eaf133c1e4370be544b7ff2c61"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.983160 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997090 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-cnibin\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997163 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-system-cni-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-os-release\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997261 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997269 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-system-cni-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997269 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-cnibin\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997385 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-os-release\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997390 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-binary-copy\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997524 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997561 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94dgd\" (UniqueName: \"kubernetes.io/projected/6f3af476-577a-46f9-a71c-60fab8fdaa68-kube-api-access-94dgd\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.998087 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.998449 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.998665 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-binary-copy\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.999470 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.013059 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc 
kubenswrapper[5012]: I0219 05:25:27.016976 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94dgd\" (UniqueName: \"kubernetes.io/projected/6f3af476-577a-46f9-a71c-60fab8fdaa68-kube-api-access-94dgd\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.018844 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lkrsg" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.027732 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: W0219 05:25:27.041689 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7a04e36_fbaa_4de1_871a_7225433eebb0.slice/crio-8b4133672645efb3ebea44ed010f4748c5a8fb90bf6471cf32a9fe9215d736b0 WatchSource:0}: Error finding container 8b4133672645efb3ebea44ed010f4748c5a8fb90bf6471cf32a9fe9215d736b0: Status 404 returned error can't find the container with id 8b4133672645efb3ebea44ed010f4748c5a8fb90bf6471cf32a9fe9215d736b0 Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.048734 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.078485 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.110403 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.113200 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ff9w"] Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.113967 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.118680 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.118819 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.119029 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.119119 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.122813 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.122957 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.123107 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.142577 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.160809 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.193267 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.202189 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.202238 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.202260 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203135 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203213 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203254 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203292 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203351 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203394 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203423 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203457 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203489 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203509 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203539 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203566 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203600 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203622 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203656 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203680 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.207770 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.230851 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.242837 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.257465 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.274360 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.284960 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.299886 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.304929 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.304993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305026 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305041 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305054 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305073 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305091 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305094 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305130 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305135 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305150 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305177 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305184 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305206 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305232 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305251 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305269 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305310 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305283 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305328 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305469 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305551 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") pod 
\"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305591 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305557 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305648 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305675 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305682 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305718 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305749 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305767 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305702 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305628 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 
05:25:27.305668 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.307220 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.307526 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.307558 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.315978 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.321436 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.324229 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.335346 5012 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.352490 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.381224 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.397759 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.409502 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.422986 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.438406 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.445934 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.646682 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:49:27.580680535 +0000 UTC Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.975528 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.977415 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" exitCode=0 Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.977487 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.977519 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"6412d35e0c37d9d105ee4ca82031f54078f7add4cd5d9abd98a4a8c14bd96adb"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.982013 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" 
event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerStarted","Data":"904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.982051 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerStarted","Data":"3efd5bcfe001b500cc1296a01d9d80ac1878ab69cd10be70e5906643b9c996bc"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.985470 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.985538 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"8b4133672645efb3ebea44ed010f4748c5a8fb90bf6471cf32a9fe9215d736b0"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.005117 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.021728 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.041025 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.060587 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.072447 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.089389 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.100520 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.113011 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.129955 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.146226 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.165625 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.182781 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.197945 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.215601 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.233738 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.258651 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.283241 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.298698 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.321589 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.335731 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.348711 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.382543 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.407702 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.424898 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.425074 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.425209 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 
05:25:28.425268 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.425251763 +0000 UTC m=+28.458574332 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.425362 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.425352296 +0000 UTC m=+28.458674865 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.451970 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.467742 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.479924 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.525577 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.525635 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:28 crc 
kubenswrapper[5012]: I0219 05:25:28.525659 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525788 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525808 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525819 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525862 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.525848988 +0000 UTC m=+28.559171557 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525922 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526049 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526093 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.526066873 +0000 UTC m=+28.559389442 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526103 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526123 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526205 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.526181446 +0000 UTC m=+28.559504015 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.648227 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:50:33.842189684 +0000 UTC Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.702129 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.702163 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.702342 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.702425 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.702677 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.702578 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993102 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993232 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993269 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" 
event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993297 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993369 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993395 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.995115 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7" exitCode=0 Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.995237 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7"} Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.012197 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.027162 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.049087 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.073391 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.089181 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.103823 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.116598 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.133079 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.147905 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.163138 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.182284 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.197664 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.223449 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.649357 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:50:02.458686832 +0000 UTC Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.001390 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9" exitCode=0 Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.001431 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.027567 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.046715 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.066691 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.085914 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.105869 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.123013 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.140499 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.163155 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.178811 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.195859 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.212451 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.232424 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.251213 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.422606 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.426433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.426488 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc 
kubenswrapper[5012]: I0219 05:25:30.426504 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.426689 5012 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.437468 5012 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.437930 5012 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439197 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439274 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439290 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.459269 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464138 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464190 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464203 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464216 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.482964 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488582 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488597 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.507502 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.516999 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.517064 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.517081 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.517108 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.517131 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.539446 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545080 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545144 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545217 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.562793 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.563126 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.565576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.565722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.565831 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.565926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.566023 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.650940 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:08:07.524514588 +0000 UTC Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.669955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.670012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.670047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.670073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.670093 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.702793 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.702865 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.702953 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.703088 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.702793 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.703273 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773894 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773941 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773996 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877637 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877729 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.981994 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.982063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.982102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.982134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.982158 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.010563 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.014415 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e" exitCode=0 Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.014484 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.081382 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084427 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084458 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084471 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084505 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.098208 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.120141 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.135720 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.148722 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 
05:25:31.161836 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.174181 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186921 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186930 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186948 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186959 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.194412 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.205864 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.215431 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.227161 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.237216 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.251846 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.292277 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc 
kubenswrapper[5012]: I0219 05:25:31.292386 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.292456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.292489 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.292509 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397867 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397894 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397914 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501140 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501222 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501260 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501280 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.520195 5012 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604752 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604833 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604881 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.651510 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:16:22.346602674 +0000 UTC Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709784 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709860 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709886 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709905 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812790 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812869 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812898 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812919 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915549 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915653 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915684 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915710 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019238 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019286 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.023203 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827" exitCode=0 Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.023264 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.047085 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.067553 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.085858 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.106254 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123013 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123902 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123917 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123937 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123950 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.145202 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.172773 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486
261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.199197 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.219074 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227060 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227077 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227090 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.250152 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.269374 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.284412 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.299884 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330783 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330833 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330922 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.433975 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.434022 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.434032 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.434049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.434063 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.472509 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.472648 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.472765 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.472814 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.472754246 +0000 UTC m=+36.506076865 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.472875 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.472854829 +0000 UTC m=+36.506177438 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537824 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537916 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537942 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537975 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537999 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.573877 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.573997 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.574056 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574184 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574261 
5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574266 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574280 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574359 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574382 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574404 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.574380786 +0000 UTC m=+36.607703395 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574458 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.574433358 +0000 UTC m=+36.607755957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574291 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574602 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.574565661 +0000 UTC m=+36.607888270 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.641937 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.642097 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.642129 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.642205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.642229 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.651633 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:41:31.000674437 +0000 UTC Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.702519 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.702574 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.702758 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.702825 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.703035 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.703363 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746273 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746371 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746393 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746419 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746439 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849497 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849586 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849611 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849671 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952829 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952920 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952939 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952968 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952987 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.029026 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerStarted","Data":"b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.042626 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c
2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.056963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.057010 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.057022 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.057045 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.057058 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.058529 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.087273 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\
\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.108382 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.124842 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.144627 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161443 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161530 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161560 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161583 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.164631 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.185597 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.204928 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.225934 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94d
gd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.248872 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.264923 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.265294 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.265333 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.265343 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc 
kubenswrapper[5012]: I0219 05:25:33.265360 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.265371 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.284960 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 
05:25:33.370212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.370266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.370281 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.370322 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.370339 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.473616 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.474090 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.474114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.474151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.474170 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577080 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577093 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.652222 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:18:58.169749953 +0000 UTC Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680499 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680551 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680597 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680615 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784812 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784826 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784851 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784866 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887797 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887811 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991623 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991689 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991708 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991736 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991757 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.038528 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.039753 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.039807 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.039823 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.051660 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68" exitCode=0 Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.051721 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.065225 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.081980 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.082874 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.082868 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095644 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095690 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095701 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095721 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095735 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.107502 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.121503 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.135661 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.152645 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.169510 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.185187 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.199010 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc 
kubenswrapper[5012]: I0219 05:25:34.199068 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.199079 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.199097 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.199166 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.200580 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.216747 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.231107 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.251617 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.264571 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.278384 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.296444 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.301727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.301784 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.301802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc 
kubenswrapper[5012]: I0219 05:25:34.301828 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.301847 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.313653 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.335397 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.347215 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.360198 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.373671 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.394145 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405438 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405474 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405492 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.408203 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df82
5fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.421884 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.440385 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.457604 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.469930 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509389 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509445 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509514 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.612611 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.612858 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.612919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.613011 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.613102 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.653233 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:10:50.538376142 +0000 UTC Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.693935 5012 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.702383 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.702407 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:34 crc kubenswrapper[5012]: E0219 05:25:34.702887 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.703041 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:34 crc kubenswrapper[5012]: E0219 05:25:34.702902 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:34 crc kubenswrapper[5012]: E0219 05:25:34.703229 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716382 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716546 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716733 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716932 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.723180 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.744643 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.763974 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.788121 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.813277 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819555 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819634 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819678 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.834761 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.868190 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.889062 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.904732 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.918952 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923148 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923258 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.939975 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.956298 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.975034 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.026281 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc 
kubenswrapper[5012]: I0219 05:25:35.026351 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.026368 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.026386 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.026401 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.062964 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a" exitCode=0 Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.063033 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.082276 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.105359 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.123647 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129420 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129486 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129509 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129538 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129559 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.144813 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.162391 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.176677 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.193962 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.211088 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232600 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232617 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.238734 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.259706 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.276398 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.293578 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.316053 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.336956 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.337028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.337047 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.337078 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.337097 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439666 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439728 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439743 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439754 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542653 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542690 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542703 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542712 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645328 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645363 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645371 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645393 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645402 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.653771 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:35:44.8451045 +0000 UTC Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748600 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748611 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748624 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748634 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854607 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854675 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854738 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958672 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958726 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958739 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958760 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958774 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.061895 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.061956 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.061976 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.062003 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.062018 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.073619 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerStarted","Data":"0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.095573 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.110664 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.131375 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.148227 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.163935 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.165619 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc 
kubenswrapper[5012]: I0219 05:25:36.165672 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.165684 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.165710 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.165727 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.187009 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.205271 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.242116 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.267026 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269182 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269213 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269229 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269271 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.289661 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.320445 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.353967 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371618 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371660 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371670 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371687 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371699 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.388101 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474772 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474799 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579140 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579159 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579184 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579202 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.654148 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:43:16.147267767 +0000 UTC Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683013 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683070 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683138 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.686545 5012 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.702750 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.702785 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:36 crc kubenswrapper[5012]: E0219 05:25:36.702909 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.702971 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:36 crc kubenswrapper[5012]: E0219 05:25:36.703128 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:36 crc kubenswrapper[5012]: E0219 05:25:36.703250 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786438 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786491 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786537 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786557 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890327 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890388 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890411 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890464 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890485 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.992929 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.992974 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.992995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.993019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.993038 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.079709 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/0.log" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.083900 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b" exitCode=1 Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.083962 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.085135 5012 scope.go:117] "RemoveContainer" containerID="7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096430 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096496 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.111089 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.136112 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.157327 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.182163 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201083 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201105 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201152 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.206502 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.239449 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:36.450683 6353 factory.go:656] Stopping watch factory\\\\nI0219 05:25:36.450684 6353 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 
05:25:36.450691 6353 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:36.450703 6353 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:36.450745 6353 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.450824 6353 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451051 6353 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451689 6353 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:36.451824 6353 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451966 6353 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.452179 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002
d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.257108 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.270442 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.288053 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309751 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309766 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309814 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.317405 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.333904 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.350048 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.366537 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413246 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413373 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413422 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413445 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516766 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516799 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516811 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620136 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620223 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620646 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620702 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.654519 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:51:14.96217822 +0000 UTC Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724215 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724259 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724269 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724287 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724311 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827622 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827668 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827704 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930819 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930898 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930922 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930962 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930982 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034331 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034344 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034399 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034416 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.091840 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/0.log" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.096045 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.096611 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.118230 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137079 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137091 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137115 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137131 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137284 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86
c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.159352 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.181859 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.207696 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.232242 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240855 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240866 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240885 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240898 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.255055 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.277169 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.294980 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.317189 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.335014 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343726 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343774 5012 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343793 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343818 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343838 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.355719 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.389429 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:36.450683 6353 factory.go:656] Stopping watch factory\\\\nI0219 05:25:36.450684 6353 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:36.450691 6353 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:36.450703 6353 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:36.450745 6353 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.450824 6353 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451051 6353 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451689 6353 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:36.451824 6353 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451966 6353 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.452179 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446544 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446563 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446617 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446636 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550518 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550598 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550616 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654177 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654827 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:54:37.280996348 +0000 UTC Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.702084 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.702120 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:38 crc kubenswrapper[5012]: E0219 05:25:38.702360 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:38 crc kubenswrapper[5012]: E0219 05:25:38.702505 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.702687 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:38 crc kubenswrapper[5012]: E0219 05:25:38.703015 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.757713 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.757948 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.758158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.758298 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.758504 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.862404 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.862659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.862802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.862967 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.863097 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965562 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965624 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965691 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069556 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069584 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069605 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.103911 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/1.log" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.105195 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/0.log" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.111708 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" exitCode=1 Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.111852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.112186 5012 scope.go:117] "RemoveContainer" containerID="7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.113696 5012 scope.go:117] "RemoveContainer" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" Feb 19 05:25:39 crc kubenswrapper[5012]: E0219 05:25:39.114192 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.136029 5012 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.161145 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173850 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173921 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173947 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173965 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.197951 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:36.450683 6353 factory.go:656] Stopping watch factory\\\\nI0219 05:25:36.450684 6353 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:36.450691 6353 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:36.450703 6353 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:36.450745 6353 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.450824 6353 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451051 6353 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451689 6353 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:36.451824 6353 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451966 6353 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.452179 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 
6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/li
b/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.219858 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.237241 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.257593 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.276034 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.277060 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.277115 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.277133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc 
kubenswrapper[5012]: I0219 05:25:39.277161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.277181 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.296043 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.316968 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.340362 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d1
50597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.364171 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379738 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379810 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379837 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379859 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.383023 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.402043 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483502 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483548 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.498851 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6"] Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.499631 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.502877 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.503141 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.526458 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.548223 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.568213 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.586578 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.586817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.586977 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.587121 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.587255 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.587941 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.615803 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.635691 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.655458 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:50:04.181439461 +0000 UTC Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.659424 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08465
2d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.668219 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: 
\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.668283 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.668349 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.668530 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjcr\" (UniqueName: \"kubernetes.io/projected/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-kube-api-access-wvjcr\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.681738 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691473 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691566 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691593 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691617 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.720385 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:36.450683 6353 factory.go:656] Stopping watch factory\\\\nI0219 05:25:36.450684 6353 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:36.450691 6353 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:36.450703 6353 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:36.450745 6353 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.450824 6353 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451051 6353 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451689 6353 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:36.451824 6353 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451966 6353 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.452179 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 
6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/li
b/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.740725 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.756152 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.769985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.770040 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.770077 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.770119 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjcr\" (UniqueName: 
\"kubernetes.io/projected/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-kube-api-access-wvjcr\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.771361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.771800 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.776361 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.780107 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: 
\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795657 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795723 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795791 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795875 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.797608 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.799902 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvjcr\" (UniqueName: \"kubernetes.io/projected/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-kube-api-access-wvjcr\") pod 
\"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.820813 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.821630 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\
" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: W0219 05:25:39.841392 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa645bc5_8cc3_45bc_be2e_7cf7d53abba0.slice/crio-19efa70e6545f17d8ca7b859d7d636406c8ffb71b4e80d68399592e11f46a8a3 WatchSource:0}: Error finding container 19efa70e6545f17d8ca7b859d7d636406c8ffb71b4e80d68399592e11f46a8a3: Status 404 returned error can't find the container with id 19efa70e6545f17d8ca7b859d7d636406c8ffb71b4e80d68399592e11f46a8a3 Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.899729 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.900529 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.900556 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.900591 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.900643 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004724 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004785 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004799 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004837 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107596 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.117871 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/1.log" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.130160 5012 scope.go:117] "RemoveContainer" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.130456 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.132099 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" event={"ID":"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0","Type":"ContainerStarted","Data":"8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.132192 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" event={"ID":"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0","Type":"ContainerStarted","Data":"19efa70e6545f17d8ca7b859d7d636406c8ffb71b4e80d68399592e11f46a8a3"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.151850 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.166419 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.187792 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.204439 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.210954 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.211006 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.211025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc 
kubenswrapper[5012]: I0219 05:25:40.211050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.211068 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.218853 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.241024 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.259648 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.277632 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.299577 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314291 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314370 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314389 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314416 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314437 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.322929 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.340388 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.355258 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.378579 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417349 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417409 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417424 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417445 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417460 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.427236 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.478936 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.479187 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.479146955 +0000 UTC m=+52.512469524 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.479346 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.479457 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.479598 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.479568685 +0000 UTC m=+52.512891294 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521288 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521322 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.580626 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.580721 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.580805 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581013 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581042 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581063 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581175 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.581151414 +0000 UTC m=+52.614474023 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581231 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581350 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581387 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581412 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581427 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.58138861 +0000 UTC m=+52.614711229 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581475 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.581453682 +0000 UTC m=+52.614776291 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624335 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624415 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624441 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624459 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.636281 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-q5cb2"] Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.637021 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-sh856"] Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.637244 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.637389 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.637517 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.641563 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.641940 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.648374 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.649227 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.655647 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:57:43.120829457 +0000 UTC Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.664919 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc 
kubenswrapper[5012]: I0219 05:25:40.681490 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7wt\" (UniqueName: \"kubernetes.io/projected/6e445e06-98fd-4fc2-b480-58ddf368aeb6-kube-api-access-gf7wt\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.681567 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e445e06-98fd-4fc2-b480-58ddf368aeb6-host\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.681610 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.681724 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e445e06-98fd-4fc2-b480-58ddf368aeb6-serviceca\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.681796 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7whbb\" (UniqueName: \"kubernetes.io/projected/2e231950-a365-4a82-9481-05fdac171449-kube-api-access-7whbb\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 
crc kubenswrapper[5012]: I0219 05:25:40.683584 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.701257 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.702559 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.702665 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.702755 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.702587 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.703082 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.703772 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.724019 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728744 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728764 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728819 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.743443 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.760262 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.775497 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783165 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf7wt\" (UniqueName: \"kubernetes.io/projected/6e445e06-98fd-4fc2-b480-58ddf368aeb6-kube-api-access-gf7wt\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783625 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e445e06-98fd-4fc2-b480-58ddf368aeb6-host\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783811 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e445e06-98fd-4fc2-b480-58ddf368aeb6-serviceca\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.784156 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7whbb\" (UniqueName: \"kubernetes.io/projected/2e231950-a365-4a82-9481-05fdac171449-kube-api-access-7whbb\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.784066 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.784612 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:41.284582049 +0000 UTC m=+37.317904648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e445e06-98fd-4fc2-b480-58ddf368aeb6-host\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.786022 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e445e06-98fd-4fc2-b480-58ddf368aeb6-serviceca\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.793547 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.810874 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7whbb\" (UniqueName: \"kubernetes.io/projected/2e231950-a365-4a82-9481-05fdac171449-kube-api-access-7whbb\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.813812 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf7wt\" (UniqueName: \"kubernetes.io/projected/6e445e06-98fd-4fc2-b480-58ddf368aeb6-kube-api-access-gf7wt\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.816181 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d1
50597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832145 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832165 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832210 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.834092 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.856613 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.877025 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.879233 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.879297 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.879353 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc 
kubenswrapper[5012]: I0219 05:25:40.879381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.879434 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.896696 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.898898 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.902873 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.902959 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.902987 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.903015 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.903032 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.918082 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.920401 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923575 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923645 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923707 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923727 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.942663 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763
eca77472\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.945700 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947748 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947764 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947806 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.963080 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.963087 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.967272 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.971935 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.971990 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.972009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.972036 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.972056 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: W0219 05:25:40.988091 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e445e06_98fd_4fc2_b480_58ddf368aeb6.slice/crio-c7d972296be7e71ee968839967902ffa3fdf68ae42fd65e0dd8a0ef74432c0c6 WatchSource:0}: Error finding container c7d972296be7e71ee968839967902ffa3fdf68ae42fd65e0dd8a0ef74432c0c6: Status 404 returned error can't find the container with id c7d972296be7e71ee968839967902ffa3fdf68ae42fd65e0dd8a0ef74432c0c6 Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.989447 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.993357 5012 kubelet_node_status.go:585] "Error updating node status, will 
retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.993802 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997515 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997819 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997842 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.012418 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.035444 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.055971 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.081454 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102609 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102627 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 
05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102653 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102673 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.104228 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.113543 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.122021 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.138923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" event={"ID":"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0","Type":"ContainerStarted","Data":"8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.141398 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sh856" event={"ID":"6e445e06-98fd-4fc2-b480-58ddf368aeb6","Type":"ContainerStarted","Data":"c7d972296be7e71ee968839967902ffa3fdf68ae42fd65e0dd8a0ef74432c0c6"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.147645 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event 
handler 9 for removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.160453 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.177088 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.191864 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205358 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205420 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205443 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205458 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.207552 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc 
kubenswrapper[5012]: I0219 05:25:41.224799 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.238319 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.255959 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.274885 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.287023 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.290822 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:41 crc kubenswrapper[5012]: E0219 05:25:41.291032 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:41 crc kubenswrapper[5012]: E0219 05:25:41.291122 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:42.291097499 +0000 UTC m=+38.324420078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.302578 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309326 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309375 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309390 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309413 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309429 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.318862 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.337162 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.352125 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.369218 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.386935 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a08
2b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.405805 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.413084 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.413142 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.413161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc 
kubenswrapper[5012]: I0219 05:25:41.413189 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.413211 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.426092 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 
05:25:41.442832 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.461397 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.492089 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.509400 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516796 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516855 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516876 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516902 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516921 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.526390 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86
c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.548482 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc 
kubenswrapper[5012]: I0219 05:25:41.620114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.620158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.620167 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.620183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.620194 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.656047 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:46:55.879408546 +0000 UTC Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724403 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724564 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828481 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828550 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828600 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828620 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.931956 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.932062 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.932092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.932135 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.932163 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035687 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035760 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035825 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035851 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.139919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.139973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.139985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.140007 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.140026 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.146438 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sh856" event={"ID":"6e445e06-98fd-4fc2-b480-58ddf368aeb6","Type":"ContainerStarted","Data":"6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.174412 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026
-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.190973 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.211850 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.232952 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243709 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243785 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243830 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.254922 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.275884 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.292046 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.303267 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.303489 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.303599 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:44.303570614 +0000 UTC m=+40.336893193 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.309490 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.329657 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347299 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347412 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347464 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347486 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.364568 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.381156 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.398081 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.412058 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc 
kubenswrapper[5012]: I0219 05:25:42.429333 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451346 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451367 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451425 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.452294 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.474435 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.555062 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc 
kubenswrapper[5012]: I0219 05:25:42.555126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.555144 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.555176 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.555194 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.656288 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:32:05.742562111 +0000 UTC Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659131 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659205 5012 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.703074 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.703142 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.703137 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.703349 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.703394 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.703550 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.703730 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.703961 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.762888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.762955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.762973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.763001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.763027 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.866887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.866951 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.866970 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.867001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.867022 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.970920 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.971008 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.971027 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.971055 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.971077 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.074663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.075444 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.075482 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.075511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.075533 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.179438 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.180179 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.180361 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.180510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.180634 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.284448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.284674 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.284808 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.284970 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.285373 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.388969 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.389043 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.389065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.389099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.389118 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.492880 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.492965 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.492985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.493014 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.493036 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596491 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596603 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596622 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.656766 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:53:34.171894521 +0000 UTC Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699608 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699673 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699737 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803407 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803481 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803504 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803558 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906460 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906518 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906543 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010054 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010115 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010130 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010153 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010165 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114020 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114082 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114132 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114153 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.217946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.218007 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.218026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.218051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.218074 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321869 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321886 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321912 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321935 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.327596 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.327849 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.327964 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:48.327934431 +0000 UTC m=+44.361257040 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425142 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425154 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425175 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425190 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528649 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528731 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528760 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528778 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.631996 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.632069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.632091 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.632117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.632136 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.657357 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:52:40.495057394 +0000 UTC Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.702990 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.703158 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.703264 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.703182 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.703478 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.703516 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.703672 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.703780 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734940 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734964 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734984 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.738436 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.760356 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.781056 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.796840 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.816843 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.838013 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839227 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839347 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839368 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839416 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.856875 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.878072 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.898589 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.919182 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.940266 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942948 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.961025 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.984808 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.003087 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:45Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.020505 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:45Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.045961 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:45Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046373 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046466 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046500 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046519 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150107 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150859 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150893 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150915 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254523 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254567 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254595 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254617 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357797 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357858 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357928 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357974 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462202 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462270 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462286 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462348 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462372 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.565904 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.565976 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.565999 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.566042 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.566069 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.658371 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:46:17.807316512 +0000 UTC Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670214 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670236 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670287 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773877 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773894 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773942 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877293 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877416 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877464 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.980760 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.981064 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.981199 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.981380 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.981565 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084628 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084669 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084678 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084703 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.187759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.188565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.188622 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.188655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.188680 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.291900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.291969 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.291983 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.292000 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.292012 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395829 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395873 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395895 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.499981 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.500046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.500065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.500092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.500110 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.603900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.603986 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.604005 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.604038 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.604066 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.659026 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:40:26.416354702 +0000 UTC Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.702711 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.702793 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.702882 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:46 crc kubenswrapper[5012]: E0219 05:25:46.702912 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:46 crc kubenswrapper[5012]: E0219 05:25:46.703043 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:46 crc kubenswrapper[5012]: E0219 05:25:46.703294 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.703570 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:46 crc kubenswrapper[5012]: E0219 05:25:46.703746 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706478 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706486 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706501 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706512 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.810361 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.810686 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.810884 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.811127 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.811568 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914920 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914944 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914962 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018426 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018538 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018557 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122109 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122207 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122229 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122282 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225739 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225834 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225878 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329424 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329501 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329527 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329545 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432555 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432614 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432627 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432667 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535790 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535836 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638710 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638734 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638788 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.659437 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:01:01.889708936 +0000 UTC Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741912 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741947 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741960 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845159 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845182 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948860 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.051940 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.052004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.052023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.052061 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.052082 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155757 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155776 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155829 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259533 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259609 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259630 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259661 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259684 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363109 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363201 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363231 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363254 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.378021 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.378256 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.378404 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.378371265 +0000 UTC m=+52.411693874 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467678 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467813 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571160 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571473 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.660221 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:47:51.643934886 +0000 UTC Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674904 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674929 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674948 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.703568 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.703643 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.703705 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.703755 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.703937 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.703956 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.704101 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.704208 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778631 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778822 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.881937 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.881996 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.882016 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.882039 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.882059 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986210 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986361 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986383 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986450 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986478 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090138 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090157 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090201 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192773 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192839 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296844 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296907 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400685 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400706 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.503968 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.504052 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.504073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.504102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.504122 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607748 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607816 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.660576 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:44:21.960285625 +0000 UTC Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710717 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710733 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710757 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710775 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813474 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813506 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813530 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.917902 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.917963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.918024 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.918052 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.918070 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021422 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021505 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021533 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021552 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125400 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125473 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125521 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125543 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228679 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228740 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228755 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228793 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332445 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436078 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436208 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436255 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539687 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539708 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539734 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539753 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642429 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642469 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642478 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642494 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642504 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.660720 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:34:24.460995385 +0000 UTC Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.702492 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.702581 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.702590 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:50 crc kubenswrapper[5012]: E0219 05:25:50.702746 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.702830 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:50 crc kubenswrapper[5012]: E0219 05:25:50.702931 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:50 crc kubenswrapper[5012]: E0219 05:25:50.703194 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:50 crc kubenswrapper[5012]: E0219 05:25:50.703399 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.745989 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.746058 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.746076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.746104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.746123 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849858 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849878 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953230 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953440 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953464 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045561 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045591 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045611 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.066604 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072208 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072254 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072340 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.094071 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.099989 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.100034 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.100048 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.100069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.100081 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.119430 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124275 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124326 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124346 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.145542 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150612 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150640 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150672 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150695 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.174728 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.174938 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177186 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177268 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177372 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177400 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280197 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280236 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280263 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280276 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383533 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383596 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383661 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.486814 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.486925 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.486951 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.486985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.487009 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591077 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591172 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591225 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591248 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.661083 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:22:04.77977049 +0000 UTC Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.696991 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.697104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.697130 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.697177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.697198 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800814 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800881 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800944 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904055 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904123 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904143 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904173 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904195 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.007939 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.007998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.008015 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.008040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.008057 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.111967 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.112096 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.112174 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.112211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.112238 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215344 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215423 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215469 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319529 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.422945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.423007 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.423024 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.423049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.423072 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.526965 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.527026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.527050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.527082 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.527104 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630471 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630504 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630525 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.662228 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:46:38.149322131 +0000 UTC Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.702267 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.702800 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.702861 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.702826 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:52 crc kubenswrapper[5012]: E0219 05:25:52.703054 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.703422 5012 scope.go:117] "RemoveContainer" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" Feb 19 05:25:52 crc kubenswrapper[5012]: E0219 05:25:52.703916 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:52 crc kubenswrapper[5012]: E0219 05:25:52.704130 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:52 crc kubenswrapper[5012]: E0219 05:25:52.704141 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734641 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734697 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734713 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734761 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.838279 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.839127 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.839151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.839184 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.839207 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943277 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943369 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943393 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943420 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943438 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046834 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046897 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046914 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046926 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150566 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150636 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150667 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150701 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150725 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.190073 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/1.log" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.194400 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.195121 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253441 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253517 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253532 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.255167 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc 
kubenswrapper[5012]: I0219 05:25:53.274249 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.290054 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.304234 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.318702 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.330480 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.346424 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.355948 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.355973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.355981 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.355998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.356009 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.364639 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.381452 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.395852 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.407217 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.425959 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.438918 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.456346 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458348 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458419 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458461 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.473488 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.499982 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 
05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.561971 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.562014 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.562025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.562045 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.562058 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.663055 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:45:49.662748424 +0000 UTC Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664437 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664450 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664469 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664482 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767317 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767326 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767341 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767351 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.870919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.870963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.870974 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.870992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.871005 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974614 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974660 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974672 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974691 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974707 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076874 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076895 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076927 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076955 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180436 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180507 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180530 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180575 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.201215 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/2.log" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.202289 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/1.log" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.205977 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" exitCode=1 Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.206053 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.206115 5012 scope.go:117] "RemoveContainer" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.206781 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.206967 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.232209 5012 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b9
0dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.244961 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.259544 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.275762 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.283526 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.283751 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.283901 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.284058 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.284199 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.297887 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.315740 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.332974 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.354072 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.377573 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.387924 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.387973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.387992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.388019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.388038 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.401685 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj
2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.423217 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.439569 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.456024 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc 
kubenswrapper[5012]: I0219 05:25:54.475455 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491520 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491670 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.496758 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.519528 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.594695 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc 
kubenswrapper[5012]: I0219 05:25:54.594750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.594766 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.594791 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.594812 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.664279 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:53:26.037991606 +0000 UTC Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698259 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698334 5012 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.702695 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.702756 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.702789 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.702952 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.702978 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.703133 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.703295 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.703480 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.722676 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.742994 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.761841 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.783908 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.796881 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.801973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.802031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.802048 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.802073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.802096 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.810121 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.812116 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.834828 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.858585 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.877068 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.892151 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905375 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905458 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905482 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905533 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.915440 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1
96b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.948846 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj
2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.972732 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.996751 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011558 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011636 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011657 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.024809 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.042708 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.061841 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.076115 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.096805 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.114158 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116500 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116572 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116590 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.130185 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.148503 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.164992 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.181927 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.207463 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452
b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:
25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.214049 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/2.log" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219079 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219238 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219795 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:25:55 crc kubenswrapper[5012]: E0219 05:25:55.220077 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.224738 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime
\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.239778 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.274584 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a08
2b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.309588 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.322157 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.322199 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.322211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.322242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.322256 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.327396 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 
05:25:55.339768 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.353005 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.365840 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.387795 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj
2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.404791 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.419145 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424495 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424507 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424527 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424543 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.430396 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.442208 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.453553 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.465265 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.474072 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.489677 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.506621 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.524144 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526859 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526889 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.538093 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.554386 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.563899 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.573694 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.590518 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.605938 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.621647 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.629242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.629295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.629343 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.629366 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.629385 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.664471 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:05:04.018209423 +0000 UTC Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732712 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732769 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732833 5012 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835776 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835851 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835876 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835908 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835931 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938532 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938554 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938580 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938602 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042267 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042356 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042401 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042420 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.145992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.146057 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.146074 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.146099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.146118 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249171 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249223 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249247 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352839 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352902 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352967 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.389465 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.389674 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.389757 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:12.389735866 +0000 UTC m=+68.423058435 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456149 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.490908 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.491106 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.491070259 +0000 UTC m=+84.524392868 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.491354 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.491485 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 
05:25:56.491574 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.491556141 +0000 UTC m=+84.524878710 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558404 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558443 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558454 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558470 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558480 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.592569 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.592633 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.592678 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592826 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592893 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.592876823 +0000 UTC m=+84.626199402 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592906 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592953 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593004 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593021 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593086 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.593066328 +0000 UTC m=+84.626388967 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592969 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593126 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593251 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.593212182 +0000 UTC m=+84.626534781 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661785 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661838 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.664799 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:34:19.379512092 +0000 UTC Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.702410 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.702771 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.702765 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.702960 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.702960 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.703128 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.703381 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.703513 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764696 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764838 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764939 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.868810 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.869226 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.869283 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.869364 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.869386 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973178 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973261 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973286 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973327 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076015 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076139 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076159 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179583 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179606 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179635 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179656 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282279 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282362 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282379 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282436 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385419 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385482 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385503 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385530 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385550 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488367 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488498 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.590927 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.591009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.591023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.591075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.591093 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.665779 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:26:19.037775697 +0000 UTC Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694424 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694435 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694471 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798086 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798136 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798164 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901241 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901388 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901409 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901478 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901496 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004768 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004791 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004845 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.108964 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.109051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.109066 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.109087 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.109101 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.212181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.212677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.212878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.213040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.213170 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316685 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316777 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.420395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.420987 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.421185 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.421397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.421584 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525580 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525637 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525667 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628716 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628793 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628813 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628840 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628857 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.666400 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:01:21.933126863 +0000 UTC Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.702211 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.702247 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.702265 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.702492 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:58 crc kubenswrapper[5012]: E0219 05:25:58.702479 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:58 crc kubenswrapper[5012]: E0219 05:25:58.702623 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:58 crc kubenswrapper[5012]: E0219 05:25:58.702723 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:58 crc kubenswrapper[5012]: E0219 05:25:58.702818 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732912 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732969 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836354 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836404 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836475 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.940371 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.940472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.940498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.941014 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.941045 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043440 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043521 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043549 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043571 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147105 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147128 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147159 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147182 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249335 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249414 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249478 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352813 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352862 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.455914 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.455966 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.455977 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.455997 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.456009 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559680 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559696 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559741 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663388 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663414 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663486 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.667385 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:23:11.802605001 +0000 UTC Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766185 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766223 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766281 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870041 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870132 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870182 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973680 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973752 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973775 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973828 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077250 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077353 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077372 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077415 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181670 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181756 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181819 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181890 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285153 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285172 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.387953 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.388001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.388012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.388032 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.388044 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.495700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.495780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.495845 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.496796 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.496873 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600537 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600558 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.668166 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:48:42.895356016 +0000 UTC Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.701970 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.702005 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.702086 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.702190 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:00 crc kubenswrapper[5012]: E0219 05:26:00.702457 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:00 crc kubenswrapper[5012]: E0219 05:26:00.702885 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:00 crc kubenswrapper[5012]: E0219 05:26:00.703080 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:00 crc kubenswrapper[5012]: E0219 05:26:00.703156 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704457 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704562 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704586 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704654 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704676 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.807647 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.808075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.808196 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.808420 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.808559 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911568 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911644 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911667 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911695 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911713 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014347 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014427 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014476 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117165 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117288 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117369 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220067 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220146 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220176 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220200 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323426 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323468 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426294 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426374 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426415 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426434 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495575 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495666 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495683 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495759 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.518849 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522783 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522865 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522882 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.539919 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544513 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544606 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544638 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544664 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.560918 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565072 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565666 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.584862 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589422 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589490 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589564 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.605384 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.606217 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608167 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608351 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608714 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.669178 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:20:22.94611273 +0000 UTC Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713564 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713591 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713609 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816476 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816520 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816537 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919925 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919936 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.029865 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.029954 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.029981 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.030013 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.030039 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133000 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133060 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133077 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133125 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236110 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236132 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236162 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236181 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339090 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339157 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339175 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339223 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442018 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442082 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442103 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442130 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442147 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.544979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.545046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.545067 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.545095 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.545116 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647784 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647814 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647830 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.670287 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:43:53.883796046 +0000 UTC Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.702082 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.702128 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.702146 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:02 crc kubenswrapper[5012]: E0219 05:26:02.702246 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.702345 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:02 crc kubenswrapper[5012]: E0219 05:26:02.702579 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:02 crc kubenswrapper[5012]: E0219 05:26:02.702583 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:02 crc kubenswrapper[5012]: E0219 05:26:02.702642 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750335 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750378 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750387 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750406 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750419 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.853932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.853984 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.854001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.854028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.854046 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957010 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957060 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957116 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.059901 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.059955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.059972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.060002 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.060021 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163401 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163467 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163512 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163530 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266291 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266378 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266437 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.369998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.370065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.370083 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.370109 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.370130 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.472938 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.472977 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.472988 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.473004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.473015 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575340 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575402 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575445 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575463 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.671195 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:21:35.039953499 +0000 UTC Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679121 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679277 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679350 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679387 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679411 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782748 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782835 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782860 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782879 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885518 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989654 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989736 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989761 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989816 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093859 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093880 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093970 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198168 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198227 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198273 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198291 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301757 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301848 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301897 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412193 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412268 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412362 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412386 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516029 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516095 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516146 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516167 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.619972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.620040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.620057 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.620085 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.620107 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.671910 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:04:15.977636594 +0000 UTC Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.702181 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.702263 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.702263 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:04 crc kubenswrapper[5012]: E0219 05:26:04.702432 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.702457 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:04 crc kubenswrapper[5012]: E0219 05:26:04.702583 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:04 crc kubenswrapper[5012]: E0219 05:26:04.702809 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:04 crc kubenswrapper[5012]: E0219 05:26:04.702944 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723463 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723550 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723610 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723634 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.724247 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df82
5fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.748029 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.784567 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.804820 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.826839 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.826926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.826943 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.827005 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.827023 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.827628 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8
d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.849825 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.867216 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.883871 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.904104 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.917541 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.929788 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.929852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.929875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc 
kubenswrapper[5012]: I0219 05:26:04.929908 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.929934 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.940209 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 
05:26:04.960336 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.985008 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.003971 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe14130
84dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:05Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.020700 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:05Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.033031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.033092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.033114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.033137 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 
05:26:05.033158 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.041334 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 
05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:05Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.061775 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:05Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136269 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136288 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136435 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136465 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240630 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240646 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240674 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240690 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344171 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344275 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344294 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344329 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344346 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447592 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447611 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551490 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551515 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551532 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654924 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654937 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.672508 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:26:03.654661397 +0000 UTC Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757860 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757920 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757936 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757962 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757979 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862350 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862375 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862431 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966129 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966178 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966195 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069829 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069997 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173386 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173491 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173509 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.276984 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.277030 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.277046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.277068 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.277084 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.380128 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.380493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.380684 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.380887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.381024 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.489727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.490102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.490262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.490467 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.490619 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.594418 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.594887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.595044 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.595195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.595358 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.673377 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:32:08.195725191 +0000 UTC Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.699444 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.699677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.699820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.699950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.700074 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.703491 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.703698 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.703559 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.703498 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.704096 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.704361 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.705044 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.705252 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.706354 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.706859 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804224 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804234 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804265 5012 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907555 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907614 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907635 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907654 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010668 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010734 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010755 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010769 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113768 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113783 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113801 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113814 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216563 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216579 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216605 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216623 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320108 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320196 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320248 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320267 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423098 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423162 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423179 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423189 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526854 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526909 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526930 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630796 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630816 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630867 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.674029 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:34:57.646779497 +0000 UTC Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.734821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.734957 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.734979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.735006 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.735025 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838246 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838403 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838441 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838471 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.940849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.940923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.940948 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.940979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.941008 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044011 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044066 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044082 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044132 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.146936 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.146994 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.147012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.147036 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.147056 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249721 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249766 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249784 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352680 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352761 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352787 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352843 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456264 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456281 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456334 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456354 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558513 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558553 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558579 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661571 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661604 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661619 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661656 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.675100 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:59:05.875333516 +0000 UTC Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.702492 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.702594 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:08 crc kubenswrapper[5012]: E0219 05:26:08.702665 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.702710 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.702747 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:08 crc kubenswrapper[5012]: E0219 05:26:08.702896 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:08 crc kubenswrapper[5012]: E0219 05:26:08.703037 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:08 crc kubenswrapper[5012]: E0219 05:26:08.703159 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764399 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764427 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764436 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764460 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866517 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866529 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866538 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968749 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968801 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072571 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072580 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072594 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072602 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174858 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174866 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174882 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174892 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276441 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276467 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379103 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379141 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379164 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379174 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481790 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481813 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481865 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585002 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585042 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585054 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585082 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.675598 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:50:56.2042401 +0000 UTC Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687840 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687850 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687872 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.790952 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.791040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.791069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.791101 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.791129 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894587 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997283 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997334 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997343 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997356 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997364 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101052 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101096 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101121 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101132 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204388 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204435 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204478 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204496 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307675 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307707 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409791 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409885 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409904 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512372 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512429 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512469 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512488 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615164 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615235 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615271 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.675951 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:31:00.295641366 +0000 UTC Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.702516 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.702534 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.702568 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.702625 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:10 crc kubenswrapper[5012]: E0219 05:26:10.702789 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:10 crc kubenswrapper[5012]: E0219 05:26:10.702933 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:10 crc kubenswrapper[5012]: E0219 05:26:10.703126 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:10 crc kubenswrapper[5012]: E0219 05:26:10.703348 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717489 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717532 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717551 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717591 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.819979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.820031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.820046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.820073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.820090 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923394 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923473 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026801 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026869 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026891 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026907 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129496 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129525 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129538 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231689 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231698 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231708 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231717 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.333610 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.333899 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.333960 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.334021 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.334080 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.436950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.436998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.437012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.437031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.437045 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540479 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540552 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540592 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643087 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643153 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643164 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643198 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.676950 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:51:16.405904806 +0000 UTC Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745886 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745940 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848438 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848463 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848471 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848495 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951773 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951818 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.971986 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.972052 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.972071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.972096 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.972116 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: E0219 05:26:11.986848 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:11Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991713 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991788 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991835 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.004431 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:12Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008316 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008628 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008774 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.024394 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:12Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027588 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027641 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.037922 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:12Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041411 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041489 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.054793 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:12Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.055038 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057428 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057531 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057594 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057661 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057736 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160088 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160118 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160138 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160147 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262207 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262264 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262320 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.364806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.365090 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.365154 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.365220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.365290 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.393971 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.394293 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.394466 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:44.394432892 +0000 UTC m=+100.427755531 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468522 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468645 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468702 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468722 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574593 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574604 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574634 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677061 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:07:50.09834755 +0000 UTC Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677619 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677637 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677724 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.702476 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.702509 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.702579 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.702726 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.703020 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.703344 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.703441 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.703641 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.780873 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.780947 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.780972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.781004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.781023 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884651 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884701 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884729 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.986897 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.986955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.986975 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.986995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.987010 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090148 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090173 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193440 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193450 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193464 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193474 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295526 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295552 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295561 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295584 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.397865 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.397942 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.397963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.397985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.398002 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501556 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604236 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604335 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.677370 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:01:02.128881556 +0000 UTC Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707788 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707818 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811246 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811278 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811353 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811384 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.914958 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.915145 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.915502 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.915546 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.915574 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.018985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.019047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.019063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.019090 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.019110 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121907 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121958 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225209 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225267 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225285 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225295 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.289460 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/0.log" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.289514 5012 generic.go:334] "Generic (PLEG): container finished" podID="e7a04e36-fbaa-4de1-871a-7225433eebb0" containerID="10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061" exitCode=1 Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.289565 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerDied","Data":"10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.290683 5012 scope.go:117] "RemoveContainer" containerID="10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.310153 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330094 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330483 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330558 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330603 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.345582 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.362369 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.382581 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.406615 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.427625 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433854 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433906 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433918 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433934 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433946 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.443180 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.464492 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.480434 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.502034 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.522592 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537210 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537227 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537278 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.555924 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.576134 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.589873 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5c
c436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.606043 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.622672 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639569 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639632 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639704 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.677773 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:53:00.305287547 +0000 UTC Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.702191 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.702273 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:14 crc kubenswrapper[5012]: E0219 05:26:14.702343 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.702458 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.702555 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:14 crc kubenswrapper[5012]: E0219 05:26:14.702595 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:14 crc kubenswrapper[5012]: E0219 05:26:14.702746 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:14 crc kubenswrapper[5012]: E0219 05:26:14.702899 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.723045 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741234 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741264 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741746 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.765575 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.782026 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.798479 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.818873 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.835809 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844668 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844688 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.855062 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.870970 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.893169 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.908585 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.924769 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5c
c436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.938399 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947346 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947426 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947442 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947462 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947496 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.951221 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.965900 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.980806 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.995088 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.050489 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.050539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.050551 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc 
kubenswrapper[5012]: I0219 05:26:15.050573 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.050600 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153665 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153724 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153752 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257021 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257084 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257101 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257112 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.295561 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/0.log" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.295636 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.308735 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.322773 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.341557 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362570 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362912 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362952 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362971 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.363017 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.378848 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.398294 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.415233 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.436793 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.452274 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465620 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465636 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465660 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465678 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.467397 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.484865 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.503109 5012 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.529715 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.560411 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568618 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568696 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568763 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.583592 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.604728 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.618419 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc 
kubenswrapper[5012]: I0219 05:26:15.671136 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.671178 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.671187 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.671203 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.671214 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.678290 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:03:04.08401194 +0000 UTC Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775160 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775175 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775185 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878366 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878401 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878415 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981497 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981605 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981625 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084855 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084866 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084880 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084890 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187579 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187640 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187661 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187688 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187708 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290709 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290728 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290752 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290770 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393482 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393593 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393614 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496578 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496649 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496667 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496713 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599612 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.679380 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:26:38.645619917 +0000 UTC Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702538 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702572 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702561 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702577 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:16 crc kubenswrapper[5012]: E0219 05:26:16.702709 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702859 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: E0219 05:26:16.702926 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702950 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: E0219 05:26:16.703141 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:16 crc kubenswrapper[5012]: E0219 05:26:16.703029 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805899 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805928 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805937 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805951 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805963 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908685 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908758 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908791 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012430 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012497 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012507 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012525 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012536 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139743 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139826 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139875 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245271 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245284 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348442 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348471 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451958 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451974 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451985 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555476 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555507 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658335 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658402 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658423 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658450 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658471 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.680548 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:55:41.129530635 +0000 UTC Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761549 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761603 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761622 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761667 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864906 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864923 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968118 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968142 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968175 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968211 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071166 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071248 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071273 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071294 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173513 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173583 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173628 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173645 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276042 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276111 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276160 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276177 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379825 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379896 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379913 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379939 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379956 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483157 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483176 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585186 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585243 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585265 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585292 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585360 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.681726 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:29:25.761089959 +0000 UTC Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687465 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687518 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687586 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687605 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.701977 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.702027 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.702100 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:18 crc kubenswrapper[5012]: E0219 05:26:18.702349 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.702408 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:18 crc kubenswrapper[5012]: E0219 05:26:18.702537 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:18 crc kubenswrapper[5012]: E0219 05:26:18.702634 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:18 crc kubenswrapper[5012]: E0219 05:26:18.702733 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.703769 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790764 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790782 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790807 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790826 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901298 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901338 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901362 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901380 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005144 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005201 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005259 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108825 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108865 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211927 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211966 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.312971 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/2.log" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316497 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316525 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316536 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.318787 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.319212 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.343643 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.366875 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.400821 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 
05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.418978 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422261 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422334 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422349 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422359 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.436518 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.450348 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.465354 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc 
kubenswrapper[5012]: I0219 05:26:19.483696 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.499350 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.515888 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524616 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524636 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524643 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524656 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524664 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.539231 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.560076 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.573896 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.586636 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.606901 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.621915 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.627720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.627945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.628134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.628286 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.628465 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.639349 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.681987 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:00:45.35633727 +0000 UTC Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.731284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.731686 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.731900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.732063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.732208 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.834541 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.834917 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.835081 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.835229 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.835399 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937715 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937724 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.040526 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.040808 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.040972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.041152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.041340 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.144944 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.145229 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.145424 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.145556 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.145717 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.248984 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.249442 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.249572 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.249693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.249841 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.325143 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.326861 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/2.log" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.331439 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" exitCode=1 Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.331655 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.331761 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.332889 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.333388 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.352398 5012 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353172 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353216 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353273 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.374840 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.392834 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.409010 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc 
kubenswrapper[5012]: I0219 05:26:20.425213 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.443127 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.458180 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.458265 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.458289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc 
kubenswrapper[5012]: I0219 05:26:20.458389 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.458416 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.465454 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.480379 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.500572 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a08
2b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.522208 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.541763 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561123 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561583 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561688 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561746 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.590269 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.607763 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.627351 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.648788 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666415 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666432 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666473 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.682966 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:12:34.208625139 +0000 UTC Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.685670 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:20Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0219 05:26:19.894801 7304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:26:19.895084 7304 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:26:19.895385 7304 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:26:19.895484 7304 handler.go:190] 
Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:26:19.895522 7304 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:26:19.895545 7304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:26:19.895554 7304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:26:19.895558 7304 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:26:19.895561 7304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 05:26:19.895591 7304 factory.go:656] Stopping watch factory\\\\nI0219 05:26:19.895608 7304 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.702126 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.702187 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.702160 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.702368 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.702453 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.702572 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.702681 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.702822 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769246 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769341 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769359 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769383 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769403 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872613 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872629 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872652 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872669 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976348 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976369 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976413 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080428 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080556 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183138 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183203 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183264 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285272 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285319 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285333 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.338274 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.343849 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:21 crc kubenswrapper[5012]: E0219 05:26:21.344105 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.362611 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.378098 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387496 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387515 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387528 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.391269 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86
c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.410063 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc 
kubenswrapper[5012]: I0219 05:26:21.430565 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.449274 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.469275 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.487158 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490233 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490328 5012 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490374 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.502860 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.523255 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a08
2b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.541955 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.562025 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.579296 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593731 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593776 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593793 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.606139 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.626196 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.648264 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.680027 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:20Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0219 05:26:19.894801 7304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:26:19.895084 7304 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:26:19.895385 7304 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:26:19.895484 7304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:26:19.895522 7304 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:26:19.895545 7304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:26:19.895554 7304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:26:19.895558 7304 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:26:19.895561 7304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 05:26:19.895591 7304 factory.go:656] Stopping watch factory\\\\nI0219 05:26:19.895608 7304 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.683983 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:11:11.370053429 +0000 UTC Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.696220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.696278 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.696295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.696350 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: 
I0219 05:26:21.696369 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798437 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798530 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798554 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798586 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798605 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.901970 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.902016 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.902028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.902045 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.902057 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005713 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005781 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005845 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108863 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108881 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108894 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211279 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211297 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211361 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211384 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314567 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314593 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314612 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351132 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351214 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351239 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351270 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351294 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.373518 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378624 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378679 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.399388 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404087 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404153 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404171 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404198 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404215 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.424418 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 05:26:22.399388 attempt above] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429541 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429582 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429598 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.449677 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.454944 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.455023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.455047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.455080 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.455106 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.475058 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.475292 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477537 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477670 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477698 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477730 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477752 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580681 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580732 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580751 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580826 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683162 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683185 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683216 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683244 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.684066 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:20:32.178058094 +0000 UTC Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.702694 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.702863 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.702954 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.703038 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.703593 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.703686 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.703829 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.703968 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786319 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786332 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786351 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786364 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888653 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888774 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888798 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991351 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991400 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991411 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991443 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093869 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093951 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093976 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093996 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.196995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.197033 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.197041 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.197056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.197068 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302470 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302506 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302538 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.405277 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.405729 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.405901 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.406071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.406215 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508445 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508509 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508576 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.611971 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.612026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.612044 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.612069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.612086 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.684218 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:17:36.128638914 +0000 UTC Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.714963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.715008 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.715021 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.715049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.715062 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818217 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818268 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818279 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920732 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920830 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920856 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920873 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023398 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023474 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023516 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127020 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127358 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127725 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127895 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249856 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249915 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249959 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249977 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.352995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.353050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.353067 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.353089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.353106 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456020 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456072 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456130 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559243 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559341 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559360 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559397 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.661821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.662198 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.662377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.662544 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.662680 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.685248 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:39:06.905668395 +0000 UTC Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.702606 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.702683 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:24 crc kubenswrapper[5012]: E0219 05:26:24.702984 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.703071 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.703076 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:24 crc kubenswrapper[5012]: E0219 05:26:24.703155 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:24 crc kubenswrapper[5012]: E0219 05:26:24.703257 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:24 crc kubenswrapper[5012]: E0219 05:26:24.703386 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.720786 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.744807 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d1
50597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.762196 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.766790 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.766928 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.766956 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.766986 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.767220 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.779276 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.802926 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.820803 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.835358 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.850589 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.868064 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870695 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870711 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870732 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870747 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.898421 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:20Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0219 05:26:19.894801 7304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:26:19.895084 7304 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:26:19.895385 7304 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:26:19.895484 7304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:26:19.895522 7304 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:26:19.895545 7304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:26:19.895554 7304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:26:19.895558 7304 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:26:19.895561 7304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 05:26:19.895591 7304 factory.go:656] Stopping watch factory\\\\nI0219 05:26:19.895608 7304 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.915993 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.934378 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.947765 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.961989 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc 
kubenswrapper[5012]: I0219 05:26:24.975759 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.976649 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.976786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.976911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.977070 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.977273 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.991382 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.006081 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.080952 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.081004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.081023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.081047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.081064 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188832 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.291992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.292030 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.292041 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.292059 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.292072 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.395928 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.395989 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.395999 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.396016 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.396047 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.500391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.501018 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.501172 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.501284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.506798 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610358 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610380 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610419 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610441 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.686355 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:15:18.615173137 +0000 UTC Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713712 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713771 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816708 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816832 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816884 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.920886 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.920958 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.921008 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.921041 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.921064 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024462 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024562 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024592 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024614 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127645 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127691 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232541 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232599 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232613 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232634 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232650 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.335802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.336198 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.336209 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.336225 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.336236 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439372 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439434 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439501 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542613 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542753 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542775 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645589 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645634 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645646 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645664 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645677 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.687131 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:16:59.248236989 +0000 UTC Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.702528 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.702591 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:26 crc kubenswrapper[5012]: E0219 05:26:26.702710 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.702741 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.702801 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:26 crc kubenswrapper[5012]: E0219 05:26:26.703523 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:26 crc kubenswrapper[5012]: E0219 05:26:26.703795 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:26 crc kubenswrapper[5012]: E0219 05:26:26.702968 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748055 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748110 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748147 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748165 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851164 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851283 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954362 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954474 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954493 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056822 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056870 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056892 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160738 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160810 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160834 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160851 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264292 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264343 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371488 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371510 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475036 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475111 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475128 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475156 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475173 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.577994 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.578047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.578065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.578089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.578108 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.680996 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.681049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.681068 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.681092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.681112 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.687677 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:19:14.299697104 +0000 UTC Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784807 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784840 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784863 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888529 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888600 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888623 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888640 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.991963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.992012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.992028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.992051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.992069 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.094815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.094946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.095009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.095032 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.095049 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198363 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198462 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301717 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301828 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404411 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404469 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404526 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.506940 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.507015 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.507042 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.507073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.507095 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.587268 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.587454 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 05:27:32.587424553 +0000 UTC m=+148.620747152 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.587501 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.587778 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.587836 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.587821863 +0000 UTC m=+148.621144462 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610413 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610564 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.688615 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:03:57.28881732 +0000 UTC Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.689198 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.689357 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689488 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.689523 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689584 5012 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689622 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689642 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689604 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.689576673 +0000 UTC m=+148.722899272 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689720 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.689700886 +0000 UTC m=+148.723023485 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689764 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689802 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689826 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689934 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.689904871 +0000 UTC m=+148.723227500 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.702076 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.702145 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.702228 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.702261 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.702282 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.702503 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.702686 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.702783 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713143 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713167 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713184 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816744 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816785 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816803 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.919976 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.920177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.920197 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.920224 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.920344 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.023992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.024056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.024075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.024100 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.024118 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127898 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127915 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127964 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231017 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231111 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231184 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335399 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335443 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438101 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438148 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438165 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438185 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438205 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541016 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541137 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.644815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.645452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.645487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.645510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.645527 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.689567 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:42:09.263022418 +0000 UTC Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748352 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748373 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748398 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748415 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850772 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850826 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850870 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850886 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953583 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953638 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953679 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953718 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056649 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056754 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056788 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164439 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164666 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164703 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164726 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268116 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268176 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268192 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268235 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.370958 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.371011 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.371027 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.371049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.371065 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474267 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474382 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474399 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474443 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576551 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576576 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679506 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679521 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679531 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.690678 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:43:41.160750748 +0000 UTC Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.705670 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.705704 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:30 crc kubenswrapper[5012]: E0219 05:26:30.705846 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.705916 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.706030 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:30 crc kubenswrapper[5012]: E0219 05:26:30.706094 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:30 crc kubenswrapper[5012]: E0219 05:26:30.706281 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:30 crc kubenswrapper[5012]: E0219 05:26:30.706413 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782609 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782723 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782753 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782776 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886630 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886687 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886704 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886771 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989874 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989897 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989927 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989951 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093563 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093609 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093628 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197939 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197993 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301532 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301612 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301640 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301674 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301697 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407264 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407397 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510518 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510595 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510632 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613631 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613673 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613683 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613708 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.691743 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:05:34.989596351 +0000 UTC Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716673 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716851 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716869 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.819955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.820013 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.820031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.820056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.820075 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.922452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.922762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.922947 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.923100 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.923241 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.026365 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.026811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.026945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.027102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.027271 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130781 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130890 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130907 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233739 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233758 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233797 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.336911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.336973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.336992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.337019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.337041 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441265 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441415 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441434 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441480 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.544212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.544620 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.544778 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.545004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.545190 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648520 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648600 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674032 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674086 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674108 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674136 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674161 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.692485 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:42:12.703651755 +0000 UTC Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.696068 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",
\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701274 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701678 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701841 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.702465 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.702666 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701841 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.702862 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701839 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.703057 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.703383 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.703675 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.727258 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.732978 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.733028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.733044 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.733068 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.733142 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.752362 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757818 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757835 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757857 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757872 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.777812 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.782779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.782987 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.783154 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.783345 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.783495 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.802909 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.803179 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805643 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805704 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805723 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805748 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805770 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.908328 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.908722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.908874 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.909056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.909217 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011798 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011824 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011844 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.115763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.116022 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.116196 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.116377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.116586 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220331 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220428 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220464 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220490 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.323962 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.324105 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.324127 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.324152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.324171 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428347 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428370 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428434 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428454 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.530965 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.531031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.531050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.531076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.531094 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634483 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634509 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634526 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.692876 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:22:17.222442892 +0000 UTC Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.736887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.736941 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.736957 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.736979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.737031 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840526 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840566 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840585 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944135 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944206 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944222 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944690 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944754 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048332 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151426 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151524 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151582 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151611 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255030 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255131 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255148 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358855 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358922 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358943 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358969 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358989 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462651 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462743 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462804 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462838 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564834 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564848 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564857 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666599 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666633 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666641 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666654 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666662 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.694268 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:54:32.972689843 +0000 UTC Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.702588 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.702729 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:34 crc kubenswrapper[5012]: E0219 05:26:34.702842 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.702928 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.702962 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:34 crc kubenswrapper[5012]: E0219 05:26:34.703085 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:34 crc kubenswrapper[5012]: E0219 05:26:34.703203 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:34 crc kubenswrapper[5012]: E0219 05:26:34.703318 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.754423 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.754404666 podStartE2EDuration="1m10.754404666s" podCreationTimestamp="2026-02-19 05:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.737787381 +0000 UTC m=+90.771109950" watchObservedRunningTime="2026-02-19 05:26:34.754404666 +0000 UTC m=+90.787727235" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770341 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770350 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770364 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770376 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.819187 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" podStartSLOduration=68.819170211 podStartE2EDuration="1m8.819170211s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.818501114 +0000 UTC m=+90.851823683" watchObservedRunningTime="2026-02-19 05:26:34.819170211 +0000 UTC m=+90.852492780" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.848571 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" podStartSLOduration=68.848544772 podStartE2EDuration="1m8.848544772s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.834640026 +0000 UTC m=+90.867962635" watchObservedRunningTime="2026-02-19 05:26:34.848544772 +0000 UTC m=+90.881867381" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.849079 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-sh856" podStartSLOduration=68.849072205 podStartE2EDuration="1m8.849072205s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 05:26:34.8484695 +0000 UTC m=+90.881792119" watchObservedRunningTime="2026-02-19 05:26:34.849072205 +0000 UTC m=+90.882394804" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.868437 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=70.868389988 podStartE2EDuration="1m10.868389988s" podCreationTimestamp="2026-02-19 05:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.867707991 +0000 UTC m=+90.901030600" watchObservedRunningTime="2026-02-19 05:26:34.868389988 +0000 UTC m=+90.901712597" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874086 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874135 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874154 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.937986 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.937874884 podStartE2EDuration="40.937874884s" podCreationTimestamp="2026-02-19 05:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.936297954 +0000 UTC m=+90.969620563" watchObservedRunningTime="2026-02-19 05:26:34.937874884 +0000 UTC m=+90.971197493" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980856 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980908 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980940 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980958 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.991196 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4cs9h" podStartSLOduration=69.991137855 podStartE2EDuration="1m9.991137855s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.972007966 +0000 UTC m=+91.005330615" watchObservedRunningTime="2026-02-19 05:26:34.991137855 +0000 UTC m=+91.024460464" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.042197 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podStartSLOduration=70.042170539 podStartE2EDuration="1m10.042170539s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:35.026401036 +0000 UTC m=+91.059723635" watchObservedRunningTime="2026-02-19 05:26:35.042170539 +0000 UTC m=+91.075493148" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.042636 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lkrsg" podStartSLOduration=70.042628371 podStartE2EDuration="1m10.042628371s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:35.041767169 +0000 UTC m=+91.075089778" watchObservedRunningTime="2026-02-19 05:26:35.042628371 +0000 UTC m=+91.075950980" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083850 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083867 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083890 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083911 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186390 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186465 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186490 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186508 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289611 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289667 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289684 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289705 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289722 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392712 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392754 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392809 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495784 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495847 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495866 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495895 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495919 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597715 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597725 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597744 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597758 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.694473 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:23:44.142385043 +0000 UTC Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700462 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700491 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.703900 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:35 crc kubenswrapper[5012]: E0219 05:26:35.704176 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.805992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.807883 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.808083 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.808222 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.808376 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911137 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911207 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911224 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911270 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014030 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014140 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014218 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117098 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117149 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117179 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117192 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220716 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220732 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220756 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220780 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324244 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324356 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324375 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324400 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324420 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.431871 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.432504 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.432542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.432573 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.432598 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535688 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535783 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535839 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638244 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638292 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638343 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638364 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638381 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.695395 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:53:38.303301953 +0000 UTC Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.703634 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:36 crc kubenswrapper[5012]: E0219 05:26:36.703777 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.703787 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.703839 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.703851 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:36 crc kubenswrapper[5012]: E0219 05:26:36.703953 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:36 crc kubenswrapper[5012]: E0219 05:26:36.703968 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:36 crc kubenswrapper[5012]: E0219 05:26:36.704107 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.717188 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741180 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741235 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741250 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741275 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843929 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843941 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947735 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051062 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051080 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051122 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154288 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154368 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154384 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154461 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257010 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257058 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257116 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360011 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360078 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360177 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463360 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463385 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463416 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463437 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566578 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566720 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669664 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669701 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669711 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669728 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669740 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.695832 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:40:34.606787878 +0000 UTC Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772503 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772613 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772665 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875652 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875711 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875728 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875740 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978830 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978838 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081796 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081850 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081865 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081905 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184825 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184890 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184907 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184952 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287554 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287571 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287595 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287612 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390833 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390895 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390910 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390936 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390958 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493734 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493798 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493816 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493861 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596127 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596179 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596214 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596226 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.695956 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:50:02.119644347 +0000 UTC Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699226 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699354 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.702725 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.702745 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.702768 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:38 crc kubenswrapper[5012]: E0219 05:26:38.702870 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:38 crc kubenswrapper[5012]: E0219 05:26:38.702983 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.702999 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:38 crc kubenswrapper[5012]: E0219 05:26:38.703056 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:38 crc kubenswrapper[5012]: E0219 05:26:38.703163 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801839 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801851 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905192 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905263 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905276 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905287 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008768 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008816 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008844 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008856 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111647 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111681 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111689 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111704 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111713 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214085 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214145 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214176 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.316964 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.316998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.317006 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.317018 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.317027 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418387 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418416 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418439 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418456 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521654 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521680 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521707 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521725 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625119 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625130 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625148 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625158 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.697071 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:04:07.201982104 +0000 UTC Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727749 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727822 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727863 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727879 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830475 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830535 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830587 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830611 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932935 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932957 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932974 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035524 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035562 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138444 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138502 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138523 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138540 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241368 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241385 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241406 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241423 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344250 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344333 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344357 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344387 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344404 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.446945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.446997 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.447009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.447026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.447037 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551095 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551167 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551215 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653597 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653657 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.697515 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:11:24.266614667 +0000 UTC Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.701912 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.702021 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:40 crc kubenswrapper[5012]: E0219 05:26:40.702143 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.702179 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.702197 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:40 crc kubenswrapper[5012]: E0219 05:26:40.702410 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:40 crc kubenswrapper[5012]: E0219 05:26:40.702449 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:40 crc kubenswrapper[5012]: E0219 05:26:40.702571 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757159 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757223 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757265 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757282 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.860943 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.860995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.861006 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.861040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.861052 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963758 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963775 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963787 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066458 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066505 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066516 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066531 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066541 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170272 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170344 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170364 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273549 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273565 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376652 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376793 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479386 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479443 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479473 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582421 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686413 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686489 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686509 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686534 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686553 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.698630 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:15:01.426252536 +0000 UTC Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.789867 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.789978 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.790003 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.790034 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.790057 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892737 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892798 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892816 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892828 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995296 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995383 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995401 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995442 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098899 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098917 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098943 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098962 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201604 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201675 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201731 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201758 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305717 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305744 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305763 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.412919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.412994 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.413051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.413072 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.413087 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516094 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516115 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516131 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618561 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618624 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618641 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618664 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618712 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.698888 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:30:17.631826358 +0000 UTC Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.702371 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.702451 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:42 crc kubenswrapper[5012]: E0219 05:26:42.702521 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:42 crc kubenswrapper[5012]: E0219 05:26:42.702612 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.702454 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.702692 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:42 crc kubenswrapper[5012]: E0219 05:26:42.702837 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:42 crc kubenswrapper[5012]: E0219 05:26:42.702929 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721590 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721730 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721863 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.823962 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.824008 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.824025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.824045 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.824061 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926656 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926940 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029795 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029870 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029894 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029912 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:43Z","lastTransitionTime":"2026-02-19T05:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037279 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037373 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037398 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037430 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037457 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:43Z","lastTransitionTime":"2026-02-19T05:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.102377 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"] Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.102943 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.105649 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.106078 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.106425 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.106565 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.142411 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.1423765790000004 podStartE2EDuration="7.142376579s" podCreationTimestamp="2026-02-19 05:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:43.12242083 +0000 UTC m=+99.155743429" watchObservedRunningTime="2026-02-19 05:26:43.142376579 +0000 UTC m=+99.175699188" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.259667 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.260163 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.260433 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.260645 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.260964 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.361530 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.361796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.362088 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.362385 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.362653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.361662 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.362236 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.363873 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.372184 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.392127 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.426546 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" Feb 19 05:26:43 crc kubenswrapper[5012]: W0219 05:26:43.448319 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9d33f25_d1bf_4118_b7f1_998bcd6eb548.slice/crio-a226133877922b03dd5678c89d1c8cc750b4a83353ad7c5a8a4e9429d0367f51 WatchSource:0}: Error finding container a226133877922b03dd5678c89d1c8cc750b4a83353ad7c5a8a4e9429d0367f51: Status 404 returned error can't find the container with id a226133877922b03dd5678c89d1c8cc750b4a83353ad7c5a8a4e9429d0367f51 Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.699761 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:36:13.329270916 +0000 UTC Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.700542 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.711338 5012 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.430659 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" event={"ID":"f9d33f25-d1bf-4118-b7f1-998bcd6eb548","Type":"ContainerStarted","Data":"d53c173b9107f3ca791defd755a07197cda9ce15693ddeb15e24fad36dee93c3"} Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.430715 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" 
event={"ID":"f9d33f25-d1bf-4118-b7f1-998bcd6eb548","Type":"ContainerStarted","Data":"a226133877922b03dd5678c89d1c8cc750b4a83353ad7c5a8a4e9429d0367f51"} Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.474288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.474487 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.474582 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:48.474558018 +0000 UTC m=+164.507880627 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.702592 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.702653 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.702741 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.702918 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.703516 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.704954 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.705123 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.705125 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:45 crc kubenswrapper[5012]: I0219 05:26:45.722599 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" podStartSLOduration=80.722566646 podStartE2EDuration="1m20.722566646s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:44.454875715 +0000 UTC m=+100.488198324" watchObservedRunningTime="2026-02-19 05:26:45.722566646 +0000 UTC m=+101.755889255" Feb 19 05:26:45 crc kubenswrapper[5012]: I0219 05:26:45.724111 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 05:26:46 crc kubenswrapper[5012]: I0219 05:26:46.702230 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:46 crc kubenswrapper[5012]: I0219 05:26:46.702418 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:46 crc kubenswrapper[5012]: I0219 05:26:46.702566 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:46 crc kubenswrapper[5012]: I0219 05:26:46.702621 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:46 crc kubenswrapper[5012]: E0219 05:26:46.702652 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:46 crc kubenswrapper[5012]: E0219 05:26:46.702874 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:46 crc kubenswrapper[5012]: E0219 05:26:46.703006 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:46 crc kubenswrapper[5012]: E0219 05:26:46.703451 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:47 crc kubenswrapper[5012]: I0219 05:26:47.704123 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:47 crc kubenswrapper[5012]: E0219 05:26:47.704719 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:48 crc kubenswrapper[5012]: I0219 05:26:48.702803 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:48 crc kubenswrapper[5012]: I0219 05:26:48.702881 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:48 crc kubenswrapper[5012]: I0219 05:26:48.702938 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:48 crc kubenswrapper[5012]: E0219 05:26:48.703837 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:48 crc kubenswrapper[5012]: E0219 05:26:48.703967 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:48 crc kubenswrapper[5012]: E0219 05:26:48.704174 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:48 crc kubenswrapper[5012]: I0219 05:26:48.704395 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:48 crc kubenswrapper[5012]: E0219 05:26:48.704615 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:50 crc kubenswrapper[5012]: I0219 05:26:50.702438 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:50 crc kubenswrapper[5012]: I0219 05:26:50.702487 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:50 crc kubenswrapper[5012]: E0219 05:26:50.702602 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:50 crc kubenswrapper[5012]: I0219 05:26:50.702631 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:50 crc kubenswrapper[5012]: I0219 05:26:50.702685 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:50 crc kubenswrapper[5012]: E0219 05:26:50.702777 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:50 crc kubenswrapper[5012]: E0219 05:26:50.702843 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:50 crc kubenswrapper[5012]: E0219 05:26:50.702914 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:52 crc kubenswrapper[5012]: I0219 05:26:52.701870 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:52 crc kubenswrapper[5012]: I0219 05:26:52.701912 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:52 crc kubenswrapper[5012]: I0219 05:26:52.701982 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:52 crc kubenswrapper[5012]: I0219 05:26:52.701987 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:52 crc kubenswrapper[5012]: E0219 05:26:52.702363 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:52 crc kubenswrapper[5012]: E0219 05:26:52.702745 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:52 crc kubenswrapper[5012]: E0219 05:26:52.702526 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:52 crc kubenswrapper[5012]: E0219 05:26:52.702881 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.702076 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.702134 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.702087 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.702221 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:54 crc kubenswrapper[5012]: E0219 05:26:54.702229 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:54 crc kubenswrapper[5012]: E0219 05:26:54.704526 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:54 crc kubenswrapper[5012]: E0219 05:26:54.704893 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:54 crc kubenswrapper[5012]: E0219 05:26:54.704994 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.756268 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.756237737 podStartE2EDuration="9.756237737s" podCreationTimestamp="2026-02-19 05:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:54.745931824 +0000 UTC m=+110.779254433" watchObservedRunningTime="2026-02-19 05:26:54.756237737 +0000 UTC m=+110.789560336" Feb 19 05:26:56 crc kubenswrapper[5012]: I0219 05:26:56.701876 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:56 crc kubenswrapper[5012]: I0219 05:26:56.701939 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:56 crc kubenswrapper[5012]: E0219 05:26:56.702057 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:56 crc kubenswrapper[5012]: I0219 05:26:56.702147 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:56 crc kubenswrapper[5012]: I0219 05:26:56.702172 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:56 crc kubenswrapper[5012]: E0219 05:26:56.702390 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:56 crc kubenswrapper[5012]: E0219 05:26:56.702549 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:56 crc kubenswrapper[5012]: E0219 05:26:56.702669 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:58 crc kubenswrapper[5012]: I0219 05:26:58.702448 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:58 crc kubenswrapper[5012]: I0219 05:26:58.702573 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:58 crc kubenswrapper[5012]: I0219 05:26:58.702655 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:58 crc kubenswrapper[5012]: I0219 05:26:58.702859 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:58 crc kubenswrapper[5012]: E0219 05:26:58.702834 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:58 crc kubenswrapper[5012]: E0219 05:26:58.703064 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:58 crc kubenswrapper[5012]: E0219 05:26:58.703340 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:58 crc kubenswrapper[5012]: E0219 05:26:58.703444 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:59 crc kubenswrapper[5012]: I0219 05:26:59.703062 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:59 crc kubenswrapper[5012]: E0219 05:26:59.703283 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.487976 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/1.log" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.489253 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/0.log" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.489350 5012 generic.go:334] "Generic (PLEG): container finished" podID="e7a04e36-fbaa-4de1-871a-7225433eebb0" containerID="fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd" exitCode=1 Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.489392 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerDied","Data":"fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd"} Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.489438 5012 scope.go:117] "RemoveContainer" containerID="10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.490018 5012 scope.go:117] "RemoveContainer" containerID="fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.490443 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lkrsg_openshift-multus(e7a04e36-fbaa-4de1-871a-7225433eebb0)\"" pod="openshift-multus/multus-lkrsg" podUID="e7a04e36-fbaa-4de1-871a-7225433eebb0" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.702421 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.702655 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.702700 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.702811 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.702866 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.703037 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.703165 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.703221 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:01 crc kubenswrapper[5012]: I0219 05:27:01.494946 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/1.log" Feb 19 05:27:02 crc kubenswrapper[5012]: I0219 05:27:02.702429 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:02 crc kubenswrapper[5012]: I0219 05:27:02.702458 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:02 crc kubenswrapper[5012]: E0219 05:27:02.703036 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:02 crc kubenswrapper[5012]: I0219 05:27:02.702534 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:02 crc kubenswrapper[5012]: I0219 05:27:02.702473 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:02 crc kubenswrapper[5012]: E0219 05:27:02.703164 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:02 crc kubenswrapper[5012]: E0219 05:27:02.703467 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:02 crc kubenswrapper[5012]: E0219 05:27:02.703698 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.646867 5012 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 05:27:04 crc kubenswrapper[5012]: I0219 05:27:04.702005 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:04 crc kubenswrapper[5012]: I0219 05:27:04.702088 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:04 crc kubenswrapper[5012]: I0219 05:27:04.702155 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.703808 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:04 crc kubenswrapper[5012]: I0219 05:27:04.703843 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.704031 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.704171 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.704510 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.833179 5012 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 05:27:06 crc kubenswrapper[5012]: I0219 05:27:06.702258 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:06 crc kubenswrapper[5012]: I0219 05:27:06.702435 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:06 crc kubenswrapper[5012]: E0219 05:27:06.703557 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:06 crc kubenswrapper[5012]: I0219 05:27:06.702667 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:06 crc kubenswrapper[5012]: E0219 05:27:06.703638 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:06 crc kubenswrapper[5012]: I0219 05:27:06.702514 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:06 crc kubenswrapper[5012]: E0219 05:27:06.703730 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:06 crc kubenswrapper[5012]: E0219 05:27:06.703801 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:08 crc kubenswrapper[5012]: I0219 05:27:08.701986 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:08 crc kubenswrapper[5012]: I0219 05:27:08.702017 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:08 crc kubenswrapper[5012]: I0219 05:27:08.702159 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:08 crc kubenswrapper[5012]: E0219 05:27:08.702263 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:08 crc kubenswrapper[5012]: I0219 05:27:08.702282 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:08 crc kubenswrapper[5012]: E0219 05:27:08.702456 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:08 crc kubenswrapper[5012]: E0219 05:27:08.702655 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:08 crc kubenswrapper[5012]: E0219 05:27:08.702736 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:09 crc kubenswrapper[5012]: E0219 05:27:09.835177 5012 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 05:27:10 crc kubenswrapper[5012]: I0219 05:27:10.702167 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:10 crc kubenswrapper[5012]: I0219 05:27:10.702282 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:10 crc kubenswrapper[5012]: E0219 05:27:10.702435 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:10 crc kubenswrapper[5012]: I0219 05:27:10.702504 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:10 crc kubenswrapper[5012]: I0219 05:27:10.702570 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:10 crc kubenswrapper[5012]: E0219 05:27:10.702662 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:10 crc kubenswrapper[5012]: E0219 05:27:10.702868 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:10 crc kubenswrapper[5012]: E0219 05:27:10.703058 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:11 crc kubenswrapper[5012]: I0219 05:27:11.703217 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.536106 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.541114 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.541703 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.585870 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podStartSLOduration=106.585849484 podStartE2EDuration="1m46.585849484s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:12.585070744 +0000 UTC m=+128.618393343" watchObservedRunningTime="2026-02-19 05:27:12.585849484 +0000 UTC m=+128.619172083" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.651676 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q5cb2"] Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.651855 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:12 crc kubenswrapper[5012]: E0219 05:27:12.651992 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.702963 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.703032 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.703050 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:12 crc kubenswrapper[5012]: E0219 05:27:12.703159 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:12 crc kubenswrapper[5012]: E0219 05:27:12.703326 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:12 crc kubenswrapper[5012]: E0219 05:27:12.703406 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.702092 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.702128 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.702107 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.703963 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.704008 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.704140 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.704257 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.704396 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.704949 5012 scope.go:117] "RemoveContainer" containerID="fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.835660 5012 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 19 05:27:15 crc kubenswrapper[5012]: I0219 05:27:15.552829 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/1.log" Feb 19 05:27:15 crc kubenswrapper[5012]: I0219 05:27:15.552909 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05"} Feb 19 05:27:16 crc kubenswrapper[5012]: I0219 05:27:16.702694 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:16 crc kubenswrapper[5012]: I0219 05:27:16.702747 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:16 crc kubenswrapper[5012]: I0219 05:27:16.702691 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:16 crc kubenswrapper[5012]: E0219 05:27:16.702874 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:16 crc kubenswrapper[5012]: E0219 05:27:16.702996 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:16 crc kubenswrapper[5012]: I0219 05:27:16.703060 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:16 crc kubenswrapper[5012]: E0219 05:27:16.703100 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:16 crc kubenswrapper[5012]: E0219 05:27:16.703240 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:18 crc kubenswrapper[5012]: I0219 05:27:18.701985 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:18 crc kubenswrapper[5012]: I0219 05:27:18.702258 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:27:18 crc kubenswrapper[5012]: E0219 05:27:18.702620 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:27:18 crc kubenswrapper[5012]: I0219 05:27:18.702329 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:27:18 crc kubenswrapper[5012]: I0219 05:27:18.702286 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:27:18 crc kubenswrapper[5012]: E0219 05:27:18.702728 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:27:18 crc kubenswrapper[5012]: E0219 05:27:18.702794 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:27:18 crc kubenswrapper[5012]: E0219 05:27:18.702967 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.701847 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.702422 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.702470 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.703056 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.705622 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.705902 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.706286 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.706565 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.706922 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.708163 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.810360 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.863960 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjmb9"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.864779 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.869394 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.869987 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.870002 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.870907 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.873029 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.873367 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.873541 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.872872 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874404 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874483 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874958 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874578 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874595 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.875193 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.879406 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-thnmn"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.880144 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.885069 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.886143 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.887918 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.890190 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.906895 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.907190 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.907744 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.907783 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.908018 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.908347 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.908676 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.909220 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.906112 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.913132 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.920510 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.920957 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.922112 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.926564 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.950536 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.952682 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.953320 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956232 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956412 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956522 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956581 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956644 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956769 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956809 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956888 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957021 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957165 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957320 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957423 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956232 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956771 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957643 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957576 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957739 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5lz5f"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957878 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958045 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958213 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958353 5012 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958424 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958462 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958477 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958582 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958694 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958716 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958865 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958882 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959000 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959128 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959167 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959176 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-serving-cert\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959204 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-config\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959226 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959247 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959263 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d02c79-2b95-4c7a-ae75-f366d40fe558-serving-cert\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959282 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-image-import-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959317 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959333 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959337 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19
05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959514 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-client\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959586 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7v5\" (UniqueName: \"kubernetes.io/projected/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-kube-api-access-8h7v5\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959619 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-machine-approver-tls\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959651 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959681 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959711 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-config\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959737 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af251e39-e77d-4cf8-a359-02645dc98b38-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959765 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959795 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959805 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959894 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-client\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959943 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959969 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-images\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959992 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit-dir\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960013 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297z9\"
(UniqueName: \"kubernetes.io/projected/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-kube-api-access-297z9\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960035 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960055 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960117 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960147 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960171 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-node-pullsecrets\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960192 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-dir\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960216 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjjm\" (UniqueName: \"kubernetes.io/projected/462e6b9c-5e51-439d-aee8-9e7651b8c35a-kube-api-access-spjjm\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960285 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960333 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxdl2\" (UniqueName: \"kubernetes.io/projected/af251e39-e77d-4cf8-a359-02645dc98b38-kube-api-access-cxdl2\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960357 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-service-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960407 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hctmj\" (UniqueName: \"kubernetes.io/projected/e4d02c79-2b95-4c7a-ae75-f366d40fe558-kube-api-access-hctmj\") pod \"authentication-operator-69f744f599-thnmn\" (UID:
\"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960442 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-policies\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960485 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-serving-cert\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960504 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-encryption-config\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960527 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-auth-proxy-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960552 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960575 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-encryption-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960609 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjbsx\" (UniqueName: \"kubernetes.io/projected/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-kube-api-access-qjbsx\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960629 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af251e39-e77d-4cf8-a359-02645dc98b38-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.966622 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.966865 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.967020 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.969415 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.970467 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.970491 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.971245 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"]
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.973997 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.975426 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.975438 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.975506 5012 reflector.go:368] Caches populated
for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.975908 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.976643 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.977526 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.981344 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.982093 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.986320 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.986833 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.987347 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tjxj6"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.989685 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.989854 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.990367 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twxgh"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.991035 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.991630 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.992136 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.993408 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.993901 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.994256 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.994868 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.997315 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.997865 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.998419 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xphkg"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:23.999048 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.000433 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.000521 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.014535 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.014581 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.015093 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.021490 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.030678 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.031895 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.032177 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.039797 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.040694 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.048941 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075001 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075337 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-node-pullsecrets\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075418 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-dir\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 
05:27:24.075518 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075593 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spjjm\" (UniqueName: \"kubernetes.io/projected/462e6b9c-5e51-439d-aee8-9e7651b8c35a-kube-api-access-spjjm\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075667 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075744 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxdl2\" (UniqueName: \"kubernetes.io/projected/af251e39-e77d-4cf8-a359-02645dc98b38-kube-api-access-cxdl2\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075878 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-service-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: 
\"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075952 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076020 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hctmj\" (UniqueName: \"kubernetes.io/projected/e4d02c79-2b95-4c7a-ae75-f366d40fe558-kube-api-access-hctmj\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076096 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076161 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-policies\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076230 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-serving-cert\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076318 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-encryption-config\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076397 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-auth-proxy-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076470 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076533 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-encryption-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076604 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjbsx\" 
(UniqueName: \"kubernetes.io/projected/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-kube-api-access-qjbsx\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076676 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af251e39-e77d-4cf8-a359-02645dc98b38-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076790 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-serving-cert\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-config\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077015 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077042 
5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077066 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d02c79-2b95-4c7a-ae75-f366d40fe558-serving-cert\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077093 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-image-import-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077115 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077140 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077163 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-client\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077212 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h7v5\" (UniqueName: \"kubernetes.io/projected/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-kube-api-access-8h7v5\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077237 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-machine-approver-tls\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077257 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077278 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077319 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-config\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077345 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af251e39-e77d-4cf8-a359-02645dc98b38-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077370 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077402 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077426 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-client\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077454 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077475 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-images\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077498 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit-dir\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077522 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297z9\" (UniqueName: \"kubernetes.io/projected/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-kube-api-access-297z9\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077549 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077572 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077644 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-service-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077662 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077718 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-node-pullsecrets\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077750 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-dir\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077750 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078447 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078533 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-config\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078716 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079163 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit-dir\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-config\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079939 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-images\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.080417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.080617 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.080819 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.081164 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.082487 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.083106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-auth-proxy-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.083248 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.083564 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.083728 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.080292 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.084485 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076410 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.087829 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.089147 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-policies\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.090407 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af251e39-e77d-4cf8-a359-02645dc98b38-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091582 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091710 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091706 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091889 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091995 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.092100 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" 
Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.094110 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-machine-approver-tls\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.094400 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076435 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076487 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076686 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076816 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077992 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.095986 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-image-import-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078673 
5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078850 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078924 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079027 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079155 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.103944 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.104123 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.104205 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.104351 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.104465 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105267 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-encryption-config\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105446 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105539 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105663 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105763 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105850 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105954 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106037 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105499 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106183 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 
05:27:24.105666 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106284 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106344 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106457 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106558 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.107397 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.109434 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af251e39-e77d-4cf8-a359-02645dc98b38-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.109912 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.111008 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.111535 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.112004 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.112315 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.112827 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.113604 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-serving-cert\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.113636 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.113656 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.113935 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.114431 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.114437 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.114455 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-serving-cert\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.115287 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.115375 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.115576 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.116822 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.117173 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.117430 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.119164 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.129998 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d02c79-2b95-4c7a-ae75-f366d40fe558-serving-cert\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.130109 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.130900 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.131555 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.131804 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-client\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.132048 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-client\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.132371 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.134955 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.136117 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.137280 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-encryption-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.137289 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.137712 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.140901 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gv2pd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.149832 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.152382 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.152618 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7q9qf"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.154089 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.156694 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.157499 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.157774 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gwx52"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.158738 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.159487 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-thnmn"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.161190 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.167823 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.168915 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.169847 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.170857 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.171503 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.172454 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.172888 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.173925 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tjxj6"] Feb 19 
05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.174991 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.175990 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.176958 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.177974 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178749 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178843 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726872cb-1000-4656-beea-2bd59752199c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178866 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1387b34e-3233-49a1-9e37-ef1e7f4fb660-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178889 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178908 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726872cb-1000-4656-beea-2bd59752199c-proxy-tls\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178953 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178990 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78dedde0-cb75-4ee7-8735-e6f071a02b10-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 
05:27:24.178991 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5lz5f"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179103 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179160 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179178 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179197 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1387b34e-3233-49a1-9e37-ef1e7f4fb660-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179239 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dw9hx\" (UniqueName: \"kubernetes.io/projected/726872cb-1000-4656-beea-2bd59752199c-kube-api-access-dw9hx\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179257 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsw8\" (UniqueName: \"kubernetes.io/projected/78dedde0-cb75-4ee7-8735-e6f071a02b10-kube-api-access-ktsw8\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179281 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179311 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78dedde0-cb75-4ee7-8735-e6f071a02b10-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179361 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1387b34e-3233-49a1-9e37-ef1e7f4fb660-config\") pod 
\"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.180049 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.181389 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.182165 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.183167 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x2l69"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.183992 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.184197 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8t75"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.185547 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.185672 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.186241 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.186328 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.188123 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.188164 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twxgh"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.190069 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gwx52"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.190212 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.192082 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.193480 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.195119 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.196640 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"] Feb 19 05:27:24 
crc kubenswrapper[5012]: I0219 05:27:24.197732 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8t75"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.198950 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.200050 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.202031 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.203207 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.204334 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.205467 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7q9qf"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.206574 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.206896 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.207893 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjmb9"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.209032 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2l69"] Feb 19 05:27:24 
crc kubenswrapper[5012]: I0219 05:27:24.209949 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gv2pd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.210848 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-znn5k"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.211407 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.227123 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.246645 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.277618 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.279937 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280018 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280057 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280091 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1387b34e-3233-49a1-9e37-ef1e7f4fb660-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280126 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9hx\" (UniqueName: \"kubernetes.io/projected/726872cb-1000-4656-beea-2bd59752199c-kube-api-access-dw9hx\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280155 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsw8\" (UniqueName: \"kubernetes.io/projected/78dedde0-cb75-4ee7-8735-e6f071a02b10-kube-api-access-ktsw8\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280191 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " 
pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280216 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78dedde0-cb75-4ee7-8735-e6f071a02b10-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280245 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1387b34e-3233-49a1-9e37-ef1e7f4fb660-config\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280283 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280342 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280367 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726872cb-1000-4656-beea-2bd59752199c-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280393 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1387b34e-3233-49a1-9e37-ef1e7f4fb660-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280426 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726872cb-1000-4656-beea-2bd59752199c-proxy-tls\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280453 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280488 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78dedde0-cb75-4ee7-8735-e6f071a02b10-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280935 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.281550 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726872cb-1000-4656-beea-2bd59752199c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.281560 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.281693 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1387b34e-3233-49a1-9e37-ef1e7f4fb660-config\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.284257 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.284593 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1387b34e-3233-49a1-9e37-ef1e7f4fb660-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.287126 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.287313 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.291347 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.307358 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.311426 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.327101 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.347004 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.368155 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.387096 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.406859 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.431450 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.448490 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.455724 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726872cb-1000-4656-beea-2bd59752199c-proxy-tls\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.467714 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.487508 5012 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.508733 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.527613 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.547489 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.568136 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.589471 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.610051 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.616767 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78dedde0-cb75-4ee7-8735-e6f071a02b10-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.627943 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 
05:27:24.631778 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78dedde0-cb75-4ee7-8735-e6f071a02b10-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.647861 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.668252 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.727219 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.729384 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.730382 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.747980 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.768403 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.787894 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 05:27:24 crc 
kubenswrapper[5012]: I0219 05:27:24.808015 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.829514 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.847262 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.868742 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.887923 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.908562 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.928732 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.947585 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.967377 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.035673 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spjjm\" (UniqueName: 
\"kubernetes.io/projected/462e6b9c-5e51-439d-aee8-9e7651b8c35a-kube-api-access-spjjm\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.053618 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hctmj\" (UniqueName: \"kubernetes.io/projected/e4d02c79-2b95-4c7a-ae75-f366d40fe558-kube-api-access-hctmj\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.073583 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxdl2\" (UniqueName: \"kubernetes.io/projected/af251e39-e77d-4cf8-a359-02645dc98b38-kube-api-access-cxdl2\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.086144 5012 request.go:700] Waited for 1.00699013s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.094143 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.114153 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.124687 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.134418 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297z9\" (UniqueName: \"kubernetes.io/projected/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-kube-api-access-297z9\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.148084 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.158857 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h7v5\" (UniqueName: \"kubernetes.io/projected/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-kube-api-access-8h7v5\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.162827 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.168088 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.178042 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.188343 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.197962 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.215937 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.240486 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjbsx\" (UniqueName: \"kubernetes.io/projected/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-kube-api-access-qjbsx\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.248144 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.267925 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.284094 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.287345 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.289123 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.309295 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.327551 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.348751 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.368250 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.389459 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.398579 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.409717 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.423835 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-thnmn"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.427141 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.449511 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.467908 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.487252 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.507394 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.531731 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.547538 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.567926 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 05:27:25 crc 
kubenswrapper[5012]: I0219 05:27:25.587124 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.596355 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" event={"ID":"5bb4ce13-477c-4c8d-89b5-0d6cc099095c","Type":"ContainerStarted","Data":"6eab049c47c31bf2d41a3ab4fe756097cb325ba1a85dbf667dc4e2937f242d63"} Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.596418 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" event={"ID":"5bb4ce13-477c-4c8d-89b5-0d6cc099095c","Type":"ContainerStarted","Data":"e9fe810e37e6df787ab52ad04ca8ce08181039e71290e7fdd7f3db3f700cdffb"} Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.601116 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" event={"ID":"e4d02c79-2b95-4c7a-ae75-f366d40fe558","Type":"ContainerStarted","Data":"79fd3c0788386dd9f0b11a8a5afdce9e82fdf95c32bcc886092dc5eec7a00a0d"} Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.601162 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" event={"ID":"e4d02c79-2b95-4c7a-ae75-f366d40fe558","Type":"ContainerStarted","Data":"95e2c396f0921deeceeb6d73d792abd9827b8b1dc239c2d40419276526654e59"} Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.608111 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.610531 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjmb9"] Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.619006 5012 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4888722d_d5dd_4748_ac7b_a1d11ba08e6e.slice/crio-dceabac4fd3c41899884d1330da27bd5b20c6eac03e5235cf07da17192b3dc26 WatchSource:0}: Error finding container dceabac4fd3c41899884d1330da27bd5b20c6eac03e5235cf07da17192b3dc26: Status 404 returned error can't find the container with id dceabac4fd3c41899884d1330da27bd5b20c6eac03e5235cf07da17192b3dc26 Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.627760 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.647555 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.651669 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"] Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.661338 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c537eae_5a27_4a4d_ba9e_0fd7efe72f37.slice/crio-0d7741dd8935ca80837ae4f1d3e7c159a96896d5f1a49cea2f52d5089d348d51 WatchSource:0}: Error finding container 0d7741dd8935ca80837ae4f1d3e7c159a96896d5f1a49cea2f52d5089d348d51: Status 404 returned error can't find the container with id 0d7741dd8935ca80837ae4f1d3e7c159a96896d5f1a49cea2f52d5089d348d51 Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.667004 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.688008 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.693677 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.695221 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.708064 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.708096 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod462e6b9c_5e51_439d_aee8_9e7651b8c35a.slice/crio-0a9f48a203a0a3a4a70e04f238a6e34e7595f66b91ff57dc3097a43f3ff6ddfa WatchSource:0}: Error finding container 0a9f48a203a0a3a4a70e04f238a6e34e7595f66b91ff57dc3097a43f3ff6ddfa: Status 404 returned error can't find the container with id 0a9f48a203a0a3a4a70e04f238a6e34e7595f66b91ff57dc3097a43f3ff6ddfa Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.710562 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e9dd710_d0ec_443f_a081_b18c4b6abe36.slice/crio-1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca WatchSource:0}: Error finding container 1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca: Status 404 returned error can't find the container with id 1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.733211 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.747823 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.754169 5012 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.758248 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.768421 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.788869 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.807652 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.826269 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89f1d0f3_c220_4668_b822_3b20b64ebfb8.slice/crio-5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127 WatchSource:0}: Error finding container 5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127: Status 404 returned error can't find the container with id 5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127 Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.827224 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.828242 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf251e39_e77d_4cf8_a359_02645dc98b38.slice/crio-16428c123ec8b757275908431734a8ff065b22d9c78cd4eb6cac9268a1b80501 WatchSource:0}: Error finding container 16428c123ec8b757275908431734a8ff065b22d9c78cd4eb6cac9268a1b80501: 
Status 404 returned error can't find the container with id 16428c123ec8b757275908431734a8ff065b22d9c78cd4eb6cac9268a1b80501 Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.847424 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.868209 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.887085 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.907712 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.928068 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.948786 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.968181 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.988472 5012 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.008489 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.027910 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.048943 5012 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.068054 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.087741 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.105723 5012 request.go:700] Waited for 1.825597554s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.151574 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.159321 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1387b34e-3233-49a1-9e37-ef1e7f4fb660-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.169528 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsw8\" (UniqueName: \"kubernetes.io/projected/78dedde0-cb75-4ee7-8735-e6f071a02b10-kube-api-access-ktsw8\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.206861 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208471 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208524 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a18862-6cbd-4fb1-9d69-ae768e0afddd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208643 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-975f8\" (UniqueName: \"kubernetes.io/projected/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-kube-api-access-975f8\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208682 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6383e6d2-7e9e-4927-a55a-f574e48d316d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208793 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crvb2\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-kube-api-access-crvb2\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208836 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208893 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-config\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208954 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-default-certificate\") pod 
\"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208998 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ef24f0-0d7d-4d25-a839-b650893a8332-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a18862-6cbd-4fb1-9d69-ae768e0afddd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209078 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d69b3-8e9f-4413-93c1-3c1f77388221-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209118 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzhhf\" (UniqueName: \"kubernetes.io/projected/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-kube-api-access-lzhhf\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " 
pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209193 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209235 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btf46\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-kube-api-access-btf46\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209264 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-webhook-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209287 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af89e320-2661-4860-8079-0c1ff810d97a-metrics-tls\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209354 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-trusted-ca\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209401 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209428 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/746299dc-637f-42a3-ad0d-0de202bae64e-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209465 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9hx\" (UniqueName: \"kubernetes.io/projected/726872cb-1000-4656-beea-2bd59752199c-kube-api-access-dw9hx\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209506 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6383e6d2-7e9e-4927-a55a-f574e48d316d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209552 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-serving-cert\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 
05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209600 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209624 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209656 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9907b5-e862-4242-b233-ed39e5de515a-serving-cert\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209733 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-serving-cert\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209776 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209798 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-service-ca-bundle\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209822 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209847 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.209887 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:26.709874512 +0000 UTC m=+142.743197081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209915 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209934 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209974 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gdlm\" (UniqueName: \"kubernetes.io/projected/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-kube-api-access-2gdlm\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209994 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/053058a2-c542-41f4-b393-1be45501cfa9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210025 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210046 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210073 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-metrics-tls\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210089 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-apiservice-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210104 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jq8v\" (UniqueName: \"kubernetes.io/projected/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-kube-api-access-6jq8v\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210135 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053058a2-c542-41f4-b393-1be45501cfa9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210161 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a18862-6cbd-4fb1-9d69-ae768e0afddd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210220 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210239 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210262 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-tmpfs\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6383e6d2-7e9e-4927-a55a-f574e48d316d-config\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210319 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-metrics-certs\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210338 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210399 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2l6\" (UniqueName: \"kubernetes.io/projected/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-kube-api-access-lw2l6\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210422 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdsc5\" (UniqueName: \"kubernetes.io/projected/746299dc-637f-42a3-ad0d-0de202bae64e-kube-api-access-cdsc5\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210442 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.211278 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg8nf\" (UniqueName: \"kubernetes.io/projected/e69d69b3-8e9f-4413-93c1-3c1f77388221-kube-api-access-tg8nf\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc 
kubenswrapper[5012]: I0219 05:27:26.211461 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ef24f0-0d7d-4d25-a839-b650893a8332-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.212492 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-stats-auth\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.212549 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.212709 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.212984 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6d8\" (UniqueName: 
\"kubernetes.io/projected/c4edd2db-a884-46ac-9a12-0cd2a5daaeb5-kube-api-access-dx6d8\") pod \"downloads-7954f5f757-tjxj6\" (UID: \"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5\") " pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213171 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af89e320-2661-4860-8079-0c1ff810d97a-trusted-ca\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213205 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d9907b5-e862-4242-b233-ed39e5de515a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213266 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4kq\" (UniqueName: \"kubernetes.io/projected/9d9907b5-e862-4242-b233-ed39e5de515a-kube-api-access-ft4kq\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213467 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-config\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213515 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvh75\" (UniqueName: \"kubernetes.io/projected/053058a2-c542-41f4-b393-1be45501cfa9-kube-api-access-rvh75\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.252698 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315068 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.315575 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 05:27:26.815521315 +0000 UTC m=+142.848843904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315608 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6383e6d2-7e9e-4927-a55a-f574e48d316d-config\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315645 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5kdx\" (UniqueName: \"kubernetes.io/projected/8a05e6ff-179f-4a04-9fc2-524e31980467-kube-api-access-c5kdx\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315670 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315695 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-service-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315718 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qltv6\" (UniqueName: \"kubernetes.io/projected/bdad60bd-8af5-439a-a62e-edf676281c47-kube-api-access-qltv6\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315742 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-metrics-certs\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315766 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315795 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-registration-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: 
I0219 05:27:26.315822 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdad60bd-8af5-439a-a62e-edf676281c47-signing-cabundle\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315873 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2l6\" (UniqueName: \"kubernetes.io/projected/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-kube-api-access-lw2l6\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315896 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g82wz\" (UniqueName: \"kubernetes.io/projected/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-kube-api-access-g82wz\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315939 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdsc5\" (UniqueName: \"kubernetes.io/projected/746299dc-637f-42a3-ad0d-0de202bae64e-kube-api-access-cdsc5\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315960 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: 
\"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg8nf\" (UniqueName: \"kubernetes.io/projected/e69d69b3-8e9f-4413-93c1-3c1f77388221-kube-api-access-tg8nf\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316032 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ef24f0-0d7d-4d25-a839-b650893a8332-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316058 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a05e6ff-179f-4a04-9fc2-524e31980467-config-volume\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316082 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-stats-auth\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316106 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-srv-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316130 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316158 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6d8\" (UniqueName: \"kubernetes.io/projected/c4edd2db-a884-46ac-9a12-0cd2a5daaeb5-kube-api-access-dx6d8\") pod \"downloads-7954f5f757-tjxj6\" (UID: \"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5\") " pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:26 crc 
kubenswrapper[5012]: I0219 05:27:26.316236 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316288 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fdhm\" (UniqueName: \"kubernetes.io/projected/4087f246-2160-469e-8ad1-d88c147ff7c0-kube-api-access-9fdhm\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316355 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-node-bootstrap-token\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-serving-cert\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316411 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af89e320-2661-4860-8079-0c1ff810d97a-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316448 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-srv-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316479 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d9907b5-e862-4242-b233-ed39e5de515a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316506 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft4kq\" (UniqueName: \"kubernetes.io/projected/9d9907b5-e862-4242-b233-ed39e5de515a-kube-api-access-ft4kq\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316529 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-config\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316554 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rvh75\" (UniqueName: \"kubernetes.io/projected/053058a2-c542-41f4-b393-1be45501cfa9-kube-api-access-rvh75\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316583 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5fm\" (UniqueName: \"kubernetes.io/projected/ab107439-3fd5-41e7-9d30-71962fc96028-kube-api-access-wx5fm\") pod \"migrator-59844c95c7-sppcx\" (UID: \"ab107439-3fd5-41e7-9d30-71962fc96028\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-plugins-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316627 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4087f246-2160-469e-8ad1-d88c147ff7c0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316652 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: 
\"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316674 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316697 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-975f8\" (UniqueName: \"kubernetes.io/projected/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-kube-api-access-975f8\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316722 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6383e6d2-7e9e-4927-a55a-f574e48d316d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316744 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a18862-6cbd-4fb1-9d69-ae768e0afddd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316771 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316794 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crvb2\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-kube-api-access-crvb2\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316831 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316864 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316901 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz9rv\" (UniqueName: \"kubernetes.io/projected/59cc3a77-bf98-42ed-98d8-a921b7039c6f-kube-api-access-cz9rv\") pod \"csi-hostpathplugin-n8t75\" (UID: 
\"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316933 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316981 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317006 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-config\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-default-certificate\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317065 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/20a18862-6cbd-4fb1-9d69-ae768e0afddd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317090 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bn2r\" (UniqueName: \"kubernetes.io/projected/056df788-349b-4549-88ab-66bbc2ff6afb-kube-api-access-6bn2r\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317127 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ef24f0-0d7d-4d25-a839-b650893a8332-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317168 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d69b3-8e9f-4413-93c1-3c1f77388221-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317194 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzhhf\" (UniqueName: \"kubernetes.io/projected/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-kube-api-access-lzhhf\") pod \"console-operator-58897d9998-twxgh\" (UID: 
\"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317218 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-csi-data-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317243 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwl8b\" (UniqueName: \"kubernetes.io/projected/1fe71123-0d33-41fa-b582-02d70177d0f0-kube-api-access-zwl8b\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317269 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf46\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-kube-api-access-btf46\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317295 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317343 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af89e320-2661-4860-8079-0c1ff810d97a-metrics-tls\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317383 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-webhook-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317418 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-trusted-ca\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317453 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317476 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc 
kubenswrapper[5012]: I0219 05:27:26.317500 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-client\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317524 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317551 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-mountpoint-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317573 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-images\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317599 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/746299dc-637f-42a3-ad0d-0de202bae64e-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317622 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6383e6d2-7e9e-4927-a55a-f574e48d316d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317648 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-config\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317677 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317703 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-serving-cert\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317726 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bdad60bd-8af5-439a-a62e-edf676281c47-signing-key\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcl96\" (UniqueName: \"kubernetes.io/projected/4f848507-d616-4d06-885f-d84210d9b4a0-kube-api-access-xcl96\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317777 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317801 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317825 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9907b5-e862-4242-b233-ed39e5de515a-serving-cert\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317843 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317852 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.318058 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317647 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317616 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6383e6d2-7e9e-4927-a55a-f574e48d316d-config\") pod 
\"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.319182 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-config\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.319189 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:26.819163794 +0000 UTC m=+142.852486373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.320172 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-serving-cert\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.320237 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5nwdd\" (UniqueName: \"kubernetes.io/projected/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-kube-api-access-5nwdd\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.320259 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jfl\" (UniqueName: \"kubernetes.io/projected/73661022-0008-4452-b140-f0a75e4c40c7-kube-api-access-45jfl\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.322524 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.323117 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d9907b5-e862-4242-b233-ed39e5de515a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.323617 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d69b3-8e9f-4413-93c1-3c1f77388221-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mt2l6\" 
(UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.323699 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/746299dc-637f-42a3-ad0d-0de202bae64e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.323749 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-stats-auth\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324384 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-serving-cert\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324560 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-serving-cert\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324651 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-service-ca-bundle\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324682 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324724 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.325810 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-service-ca-bundle\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.325859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.325909 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326023 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326037 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326075 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a05e6ff-179f-4a04-9fc2-524e31980467-metrics-tls\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326405 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ef24f0-0d7d-4d25-a839-b650893a8332-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" 
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326565 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a18862-6cbd-4fb1-9d69-ae768e0afddd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326585 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326759 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzf97\" (UniqueName: \"kubernetes.io/projected/2876c7dd-5979-49eb-ab61-8ffce07376b2-kube-api-access-kzf97\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326819 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gdlm\" (UniqueName: \"kubernetes.io/projected/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-kube-api-access-2gdlm\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326853 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-config\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: 
\"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326865 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053058a2-c542-41f4-b393-1be45501cfa9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326979 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.327654 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af89e320-2661-4860-8079-0c1ff810d97a-metrics-tls\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328223 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328407 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-cert\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328598 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328787 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-metrics-tls\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053058a2-c542-41f4-b393-1be45501cfa9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.329153 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a18862-6cbd-4fb1-9d69-ae768e0afddd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.329232 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-certs\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330078 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-apiservice-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330279 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jq8v\" (UniqueName: \"kubernetes.io/projected/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-kube-api-access-6jq8v\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330608 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73661022-0008-4452-b140-f0a75e4c40c7-proxy-tls\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330720 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330748 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-socket-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330965 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331067 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc 
kubenswrapper[5012]: I0219 05:27:26.331252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331765 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-tmpfs\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331833 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331942 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9907b5-e862-4242-b233-ed39e5de515a-serving-cert\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.332505 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") 
pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.332995 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ef24f0-0d7d-4d25-a839-b650893a8332-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331507 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-trusted-ca\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.334604 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.337034 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.337534 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af89e320-2661-4860-8079-0c1ff810d97a-trusted-ca\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.338039 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.338168 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.338889 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-default-certificate\") pod 
\"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.338999 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6383e6d2-7e9e-4927-a55a-f574e48d316d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.339450 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-metrics-certs\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.339791 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.340339 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.341585 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.341649 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053058a2-c542-41f4-b393-1be45501cfa9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.341657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.342524 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053058a2-c542-41f4-b393-1be45501cfa9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.343672 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: 
\"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.343813 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-tmpfs\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.343887 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-webhook-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.345141 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.345220 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.346792 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-metrics-tls\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.347422 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-apiservice-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.348880 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a18862-6cbd-4fb1-9d69-ae768e0afddd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.349401 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.358983 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.365206 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.367783 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdsc5\" (UniqueName: \"kubernetes.io/projected/746299dc-637f-42a3-ad0d-0de202bae64e-kube-api-access-cdsc5\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.388526 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg8nf\" (UniqueName: \"kubernetes.io/projected/e69d69b3-8e9f-4413-93c1-3c1f77388221-kube-api-access-tg8nf\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.424922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzhhf\" (UniqueName: \"kubernetes.io/projected/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-kube-api-access-lzhhf\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434264 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.434431 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:26.934393889 +0000 UTC m=+142.967716458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434537 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73661022-0008-4452-b140-f0a75e4c40c7-proxy-tls\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434584 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434608 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-socket-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 
crc kubenswrapper[5012]: I0219 05:27:26.434627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434823 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434858 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434899 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5kdx\" (UniqueName: \"kubernetes.io/projected/8a05e6ff-179f-4a04-9fc2-524e31980467-kube-api-access-c5kdx\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") pod 
\"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434937 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-service-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434951 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qltv6\" (UniqueName: \"kubernetes.io/projected/bdad60bd-8af5-439a-a62e-edf676281c47-kube-api-access-qltv6\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434986 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-registration-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435003 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdad60bd-8af5-439a-a62e-edf676281c47-signing-cabundle\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g82wz\" (UniqueName: 
\"kubernetes.io/projected/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-kube-api-access-g82wz\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435080 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a05e6ff-179f-4a04-9fc2-524e31980467-config-volume\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435098 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-srv-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435133 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435158 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fdhm\" (UniqueName: \"kubernetes.io/projected/4087f246-2160-469e-8ad1-d88c147ff7c0-kube-api-access-9fdhm\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435176 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-node-bootstrap-token\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435214 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-serving-cert\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435233 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-srv-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435259 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5fm\" (UniqueName: \"kubernetes.io/projected/ab107439-3fd5-41e7-9d30-71962fc96028-kube-api-access-wx5fm\") pod \"migrator-59844c95c7-sppcx\" (UID: \"ab107439-3fd5-41e7-9d30-71962fc96028\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437000 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-plugins-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 
05:27:26.437148 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4087f246-2160-469e-8ad1-d88c147ff7c0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437240 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437265 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz9rv\" (UniqueName: \"kubernetes.io/projected/59cc3a77-bf98-42ed-98d8-a921b7039c6f-kube-api-access-cz9rv\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437332 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437351 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437370 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bn2r\" (UniqueName: \"kubernetes.io/projected/056df788-349b-4549-88ab-66bbc2ff6afb-kube-api-access-6bn2r\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437409 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-csi-data-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437427 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwl8b\" (UniqueName: \"kubernetes.io/projected/1fe71123-0d33-41fa-b582-02d70177d0f0-kube-api-access-zwl8b\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437547 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-client\") 
pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437569 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-mountpoint-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437585 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-images\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-config\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437649 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437668 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/bdad60bd-8af5-439a-a62e-edf676281c47-signing-key\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437706 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcl96\" (UniqueName: \"kubernetes.io/projected/4f848507-d616-4d06-885f-d84210d9b4a0-kube-api-access-xcl96\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437758 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nwdd\" (UniqueName: \"kubernetes.io/projected/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-kube-api-access-5nwdd\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jfl\" (UniqueName: \"kubernetes.io/projected/73661022-0008-4452-b140-f0a75e4c40c7-kube-api-access-45jfl\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc 
kubenswrapper[5012]: I0219 05:27:26.437832 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a05e6ff-179f-4a04-9fc2-524e31980467-metrics-tls\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437868 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzf97\" (UniqueName: \"kubernetes.io/projected/2876c7dd-5979-49eb-ab61-8ffce07376b2-kube-api-access-kzf97\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437899 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-cert\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437915 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-certs\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438256 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a05e6ff-179f-4a04-9fc2-524e31980467-config-volume\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438360 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdad60bd-8af5-439a-a62e-edf676281c47-signing-cabundle\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438467 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-registration-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.436496 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-socket-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438715 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438763 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-plugins-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438969 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-csi-data-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.439555 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-service-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.439846 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.441780 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.441824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-client\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.442814 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-images\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443177 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443404 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443414 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-srv-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443575 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-mountpoint-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443823 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-certs\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.444327 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.445224 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-cert\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.445228 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.445712 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:26.945695729 +0000 UTC m=+142.979018298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.445860 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a05e6ff-179f-4a04-9fc2-524e31980467-metrics-tls\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.446119 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bdad60bd-8af5-439a-a62e-edf676281c47-signing-key\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.446339 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4087f246-2160-469e-8ad1-d88c147ff7c0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.446871 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.447162 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73661022-0008-4452-b140-f0a75e4c40c7-proxy-tls\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.447191 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-config\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.448349 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-serving-cert\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.450845 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.451518 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw2l6\" (UniqueName: \"kubernetes.io/projected/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-kube-api-access-lw2l6\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: 
\"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.451764 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-srv-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.453118 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-node-bootstrap-token\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.464628 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6383e6d2-7e9e-4927-a55a-f574e48d316d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.472820 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp"] Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.485512 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 
05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.502819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a18862-6cbd-4fb1-9d69-ae768e0afddd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.526366 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crvb2\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-kube-api-access-crvb2\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.544499 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.544851 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvh75\" (UniqueName: \"kubernetes.io/projected/053058a2-c542-41f4-b393-1be45501cfa9-kube-api-access-rvh75\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.544929 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.044916445 +0000 UTC m=+143.078239014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.561332 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.567492 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: W0219 05:27:26.570161 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ff8f20f_5302_4b7a_826c_5d557c65c0f3.slice/crio-a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362 WatchSource:0}: Error finding container a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362: Status 404 returned error can't find the container with id a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362 Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.582990 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft4kq\" (UniqueName: 
\"kubernetes.io/projected/9d9907b5-e862-4242-b233-ed39e5de515a-kube-api-access-ft4kq\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.600111 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.602857 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-975f8\" (UniqueName: \"kubernetes.io/projected/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-kube-api-access-975f8\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.604920 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.613277 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.617567 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" event={"ID":"af251e39-e77d-4cf8-a359-02645dc98b38","Type":"ContainerStarted","Data":"04a0479ceca2d7c3d98f48841a72ac8f9cdcfe7a51cd069a0da195c69a50dcc4"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.617615 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" event={"ID":"af251e39-e77d-4cf8-a359-02645dc98b38","Type":"ContainerStarted","Data":"16428c123ec8b757275908431734a8ff065b22d9c78cd4eb6cac9268a1b80501"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.619789 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" event={"ID":"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37","Type":"ContainerStarted","Data":"fdfd96a1742cbc885fb02908713154e8eeafc0be605934b38fea0b959dfb94fa"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.619854 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" event={"ID":"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37","Type":"ContainerStarted","Data":"6bcb27d02c242e50f41dff5d3edfb6d23e7b0ec6741fafd6e98e30f973688d1a"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.619865 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" event={"ID":"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37","Type":"ContainerStarted","Data":"0d7741dd8935ca80837ae4f1d3e7c159a96896d5f1a49cea2f52d5089d348d51"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.620748 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6d8\" (UniqueName: 
\"kubernetes.io/projected/c4edd2db-a884-46ac-9a12-0cd2a5daaeb5-kube-api-access-dx6d8\") pod \"downloads-7954f5f757-tjxj6\" (UID: \"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5\") " pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.624242 5012 generic.go:334] "Generic (PLEG): container finished" podID="462e6b9c-5e51-439d-aee8-9e7651b8c35a" containerID="c648ff6aec50cfe6e7d2a4e378014c657c533b03b6123ec965f8259cf201507b" exitCode=0 Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.624294 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" event={"ID":"462e6b9c-5e51-439d-aee8-9e7651b8c35a","Type":"ContainerDied","Data":"c648ff6aec50cfe6e7d2a4e378014c657c533b03b6123ec965f8259cf201507b"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.624363 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" event={"ID":"462e6b9c-5e51-439d-aee8-9e7651b8c35a","Type":"ContainerStarted","Data":"0a9f48a203a0a3a4a70e04f238a6e34e7595f66b91ff57dc3097a43f3ff6ddfa"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.625888 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.626957 5012 generic.go:334] "Generic (PLEG): container finished" podID="4888722d-d5dd-4748-ac7b-a1d11ba08e6e" containerID="8f1c467ab4f27a880f493ba53ce7139248e78218a82a593f1eae696eaccae534" exitCode=0 Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.627060 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" event={"ID":"4888722d-d5dd-4748-ac7b-a1d11ba08e6e","Type":"ContainerDied","Data":"8f1c467ab4f27a880f493ba53ce7139248e78218a82a593f1eae696eaccae534"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.627128 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" event={"ID":"4888722d-d5dd-4748-ac7b-a1d11ba08e6e","Type":"ContainerStarted","Data":"dceabac4fd3c41899884d1330da27bd5b20c6eac03e5235cf07da17192b3dc26"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.631384 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mlxbg" event={"ID":"5ff8f20f-5302-4b7a-826c-5d557c65c0f3","Type":"ContainerStarted","Data":"a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.638395 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" event={"ID":"1387b34e-3233-49a1-9e37-ef1e7f4fb660","Type":"ContainerStarted","Data":"40ecc847082b185c9f5608425745da5dba3e87711df30e0f64bb96d2e7855856"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.642249 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.643584 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" event={"ID":"7e9dd710-d0ec-443f-a081-b18c4b6abe36","Type":"ContainerStarted","Data":"d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.643649 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.643663 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" event={"ID":"7e9dd710-d0ec-443f-a081-b18c4b6abe36","Type":"ContainerStarted","Data":"1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.645522 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.646341 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.646693 5012 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.146679891 +0000 UTC m=+143.180002460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.647121 5012 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ntrlp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.647166 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.649700 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.653651 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" event={"ID":"5bb4ce13-477c-4c8d-89b5-0d6cc099095c","Type":"ContainerStarted","Data":"dc79225825a23c7457dc0cbf7bbf75007f42f24dffaccf93abbdc2e0d2881172"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.665746 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" event={"ID":"89f1d0f3-c220-4668-b822-3b20b64ebfb8","Type":"ContainerStarted","Data":"0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.665790 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" event={"ID":"89f1d0f3-c220-4668-b822-3b20b64ebfb8","Type":"ContainerStarted","Data":"5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127"} Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.666068 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.667788 5012 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mn4f2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.667835 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.672536 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.675766 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btf46\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-kube-api-access-btf46\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.708197 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.708688 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.710785 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.717277 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gdlm\" (UniqueName: \"kubernetes.io/projected/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-kube-api-access-2gdlm\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.748884 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.749748 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.249673441 +0000 UTC m=+143.282996010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.752095 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.752686 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.252665283 +0000 UTC m=+143.285987932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.792937 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng"] Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.806731 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7"] Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.829042 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bn2r\" (UniqueName: \"kubernetes.io/projected/056df788-349b-4549-88ab-66bbc2ff6afb-kube-api-access-6bn2r\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.831355 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.839895 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jq8v\" (UniqueName: \"kubernetes.io/projected/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-kube-api-access-6jq8v\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.844361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5kdx\" (UniqueName: \"kubernetes.io/projected/8a05e6ff-179f-4a04-9fc2-524e31980467-kube-api-access-c5kdx\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.844948 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g82wz\" (UniqueName: \"kubernetes.io/projected/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-kube-api-access-g82wz\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.844983 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qltv6\" (UniqueName: \"kubernetes.io/projected/bdad60bd-8af5-439a-a62e-edf676281c47-kube-api-access-qltv6\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.848748 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") pod \"collect-profiles-29524635-psnb6\" 
(UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: W0219 05:27:26.850627 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc75dab1e_8eb0_42e5_bc33_f0bf1ebb3dd8.slice/crio-3cc18f037d08f528d9486f30b54c03b3bc2a368afd5e2a0f8f65abd0b01d01b2 WatchSource:0}: Error finding container 3cc18f037d08f528d9486f30b54c03b3bc2a368afd5e2a0f8f65abd0b01d01b2: Status 404 returned error can't find the container with id 3cc18f037d08f528d9486f30b54c03b3bc2a368afd5e2a0f8f65abd0b01d01b2 Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.857067 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.857128 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: W0219 05:27:26.857692 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod056df788_349b_4549_88ab_66bbc2ff6afb.slice/crio-a50a30afb10e585dd6a545d1b8c076b53501b2afc629b3e33d2e3eb0b6e3ec66 WatchSource:0}: Error finding container a50a30afb10e585dd6a545d1b8c076b53501b2afc629b3e33d2e3eb0b6e3ec66: Status 404 returned error can't find the container with id a50a30afb10e585dd6a545d1b8c076b53501b2afc629b3e33d2e3eb0b6e3ec66 Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.858920 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.358892582 +0000 UTC m=+143.392215151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.863013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzf97\" (UniqueName: \"kubernetes.io/projected/2876c7dd-5979-49eb-ab61-8ffce07376b2-kube-api-access-kzf97\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.865332 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.865797 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.36578477 +0000 UTC m=+143.399107339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.869290 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.876355 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.886702 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5fm\" (UniqueName: \"kubernetes.io/projected/ab107439-3fd5-41e7-9d30-71962fc96028-kube-api-access-wx5fm\") pod \"migrator-59844c95c7-sppcx\" (UID: \"ab107439-3fd5-41e7-9d30-71962fc96028\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.896126 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.907746 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwl8b\" (UniqueName: \"kubernetes.io/projected/1fe71123-0d33-41fa-b582-02d70177d0f0-kube-api-access-zwl8b\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.937587 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.943942 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nwdd\" (UniqueName: \"kubernetes.io/projected/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-kube-api-access-5nwdd\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.958107 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcl96\" (UniqueName: \"kubernetes.io/projected/4f848507-d616-4d06-885f-d84210d9b4a0-kube-api-access-xcl96\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.964548 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"] Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.966042 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.966471 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.466455517 +0000 UTC m=+143.499778086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.981116 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.981516 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jfl\" (UniqueName: \"kubernetes.io/projected/73661022-0008-4452-b140-f0a75e4c40c7-kube-api-access-45jfl\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.993075 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fdhm\" (UniqueName: \"kubernetes.io/projected/4087f246-2160-469e-8ad1-d88c147ff7c0-kube-api-access-9fdhm\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.018914 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.025317 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.030394 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.032604 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.034732 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz9rv\" (UniqueName: \"kubernetes.io/projected/59cc3a77-bf98-42ed-98d8-a921b7039c6f-kube-api-access-cz9rv\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.035746 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.053599 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.056002 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.062789 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.067295 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.067636 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.567623616 +0000 UTC m=+143.600946185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.067977 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.079293 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.081546 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"] Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.095936 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.096171 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.101952 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.155705 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.186184 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.186963 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.686937743 +0000 UTC m=+143.720260312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.188270 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.188669 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.68865779 +0000 UTC m=+143.721980359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.269163 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" podStartSLOduration=122.269146754 podStartE2EDuration="2m2.269146754s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:27.268402894 +0000 UTC m=+143.301725463" watchObservedRunningTime="2026-02-19 05:27:27.269146754 +0000 UTC m=+143.302469323" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.289092 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.289550 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.789508701 +0000 UTC m=+143.822831270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.358356 5012 csr.go:261] certificate signing request csr-tdkx2 is approved, waiting to be issued Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.361875 5012 csr.go:257] certificate signing request csr-tdkx2 is issued Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.398548 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.399261 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.899249435 +0000 UTC m=+143.932572004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.499662 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.501655 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.001618658 +0000 UTC m=+144.034941227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.520288 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twxgh"] Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.546353 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"] Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.602978 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.603346 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.103329603 +0000 UTC m=+144.136652172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.703417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.703814 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.203790423 +0000 UTC m=+144.237112992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.703984 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.704234 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.204223565 +0000 UTC m=+144.237546134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.717616 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xphkg" event={"ID":"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8","Type":"ContainerStarted","Data":"3cc18f037d08f528d9486f30b54c03b3bc2a368afd5e2a0f8f65abd0b01d01b2"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.724399 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" event={"ID":"462e6b9c-5e51-439d-aee8-9e7651b8c35a","Type":"ContainerStarted","Data":"0c0f045ad8f20f4d2fa1d08ae6232c30b02c2dba6fbfca9cb8ffdaba769ddb7c"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.730414 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" event={"ID":"726872cb-1000-4656-beea-2bd59752199c","Type":"ContainerStarted","Data":"89a60f47a5cdc0e142d383055eb74d5054314b0d415d54821b99d42ee41fc662"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.734291 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" event={"ID":"af89e320-2661-4860-8079-0c1ff810d97a","Type":"ContainerStarted","Data":"8b47b8c80257fd82dfb429e3a88ae831fb90a10c1e5ddc7437102c4a757ab2fa"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.750458 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" podStartSLOduration=122.75044415 podStartE2EDuration="2m2.75044415s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:27.749728261 +0000 UTC m=+143.783050830" watchObservedRunningTime="2026-02-19 05:27:27.75044415 +0000 UTC m=+143.783766719" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.756419 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-twxgh" event={"ID":"47b7dc89-8538-41f1-b569-a2b6dcbf8f13","Type":"ContainerStarted","Data":"b9cfdf3cd72a843ab182956639ea87e0e4240a6e9a52d11112a66cd54b11b830"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.771628 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" event={"ID":"4888722d-d5dd-4748-ac7b-a1d11ba08e6e","Type":"ContainerStarted","Data":"4166b6fa7d423d9ae3f38af577e38ae7eabc1c871fd1a57bc3c8cb70d637aac8"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.773565 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" event={"ID":"78dedde0-cb75-4ee7-8735-e6f071a02b10","Type":"ContainerStarted","Data":"ec3759604eb72995a560525895feb2bb0e0e487cb41ebd60f3cbb1221dee904b"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.774757 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" event={"ID":"1387b34e-3233-49a1-9e37-ef1e7f4fb660","Type":"ContainerStarted","Data":"fa42c931d54961e0da972dd6d69040379a570a0d580e32912d21ba279f686879"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.777382 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-mlxbg" event={"ID":"5ff8f20f-5302-4b7a-826c-5d557c65c0f3","Type":"ContainerStarted","Data":"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.778689 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-znn5k" event={"ID":"056df788-349b-4549-88ab-66bbc2ff6afb","Type":"ContainerStarted","Data":"a50a30afb10e585dd6a545d1b8c076b53501b2afc629b3e33d2e3eb0b6e3ec66"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.803694 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.805126 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.806641 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.807554 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.307536824 +0000 UTC m=+144.340859383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.846889 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" podStartSLOduration=122.846874551 podStartE2EDuration="2m2.846874551s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:27.797499329 +0000 UTC m=+143.830821898" watchObservedRunningTime="2026-02-19 05:27:27.846874551 +0000 UTC m=+143.880197120" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.918211 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.919542 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.41953116 +0000 UTC m=+144.452853729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.019391 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.024508 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.524469443 +0000 UTC m=+144.557792012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.038631 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.039084 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.539067633 +0000 UTC m=+144.572390202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.139628 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.140011 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.639984976 +0000 UTC m=+144.673307545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.140355 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.140791 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.640771337 +0000 UTC m=+144.674093906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.241840 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.242187 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.742172254 +0000 UTC m=+144.775494823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.267217 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" podStartSLOduration=122.267202859 podStartE2EDuration="2m2.267202859s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.266084038 +0000 UTC m=+144.299406607" watchObservedRunningTime="2026-02-19 05:27:28.267202859 +0000 UTC m=+144.300525428" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.348609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.349167 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.849124732 +0000 UTC m=+144.882447301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.364917 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 05:22:27 +0000 UTC, rotation deadline is 2026-11-04 07:12:59.950814456 +0000 UTC Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.365011 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6193h45m31.585806979s for next certificate rotation Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.454508 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.455199 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.955183486 +0000 UTC m=+144.988506055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.467103 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" podStartSLOduration=122.467087331 podStartE2EDuration="2m2.467087331s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.46556984 +0000 UTC m=+144.498892409" watchObservedRunningTime="2026-02-19 05:27:28.467087331 +0000 UTC m=+144.500409900" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.555923 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.556224 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.056213002 +0000 UTC m=+145.089535561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.660229 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.660716 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.160700672 +0000 UTC m=+145.194023241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.762682 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.763513 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.263501477 +0000 UTC m=+145.296824046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.785010 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" podStartSLOduration=122.784995225 podStartE2EDuration="2m2.784995225s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.752689111 +0000 UTC m=+144.786011680" watchObservedRunningTime="2026-02-19 05:27:28.784995225 +0000 UTC m=+144.818317794" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796615 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" event={"ID":"4888722d-d5dd-4748-ac7b-a1d11ba08e6e","Type":"ContainerStarted","Data":"549b548692a811da65826548279e6fac63057af4d2030aa48751c4b6e8815a66"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796650 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" event={"ID":"726872cb-1000-4656-beea-2bd59752199c","Type":"ContainerStarted","Data":"7a99f0faafc66c81105a0f44329bb5ed7a91b8a3f7f5fe2bf0a012a510280b67"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796663 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" 
event={"ID":"726872cb-1000-4656-beea-2bd59752199c","Type":"ContainerStarted","Data":"c8ab7df28adc5a4f3aa17eda45aae326defdb2a0480e7720dd4a1ee12b84030c"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796673 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" event={"ID":"746299dc-637f-42a3-ad0d-0de202bae64e","Type":"ContainerStarted","Data":"6b6fe6b1308e42932c8727b67e3f9826feba304d9b2b38808224f1ecf2123c5f"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796688 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" event={"ID":"746299dc-637f-42a3-ad0d-0de202bae64e","Type":"ContainerStarted","Data":"aef3f050ec04f629fbd1ef28d8100a641bbc5c54c1fd99199ea8d822d14d4fed"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796697 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-znn5k" event={"ID":"056df788-349b-4549-88ab-66bbc2ff6afb","Type":"ContainerStarted","Data":"d494240f5e1e2065a546720a470c0c0e0ef27c2a7f601397d3fedb3284413b4f"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796710 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xphkg" event={"ID":"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8","Type":"ContainerStarted","Data":"4f7f1fb2e067945b8e4ce6e249db27ab0fcb08e71d70e36228c6f4b6b12bfa67"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.801442 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" event={"ID":"78dedde0-cb75-4ee7-8735-e6f071a02b10","Type":"ContainerStarted","Data":"16184c5994c7790a21ed9a5edb83dab3783976ba6a833b0ab5bb8f3684f4c903"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.803622 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" event={"ID":"20a18862-6cbd-4fb1-9d69-ae768e0afddd","Type":"ContainerStarted","Data":"310c06334c510410ec234d0849b19c6d1a48feed1c6926f3e5f3d29738a0ace3"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.803665 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" event={"ID":"20a18862-6cbd-4fb1-9d69-ae768e0afddd","Type":"ContainerStarted","Data":"12f0cb882a65d9ad07cedd67504bd7624b01d9934ca70532e1520bd44010cea1"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.805691 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-twxgh" event={"ID":"47b7dc89-8538-41f1-b569-a2b6dcbf8f13","Type":"ContainerStarted","Data":"15440b59cd23f4ccbdccba4cd40eff97e7d8dc84759b3a02a5b2b1a6f479c41b"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.806372 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.808317 5012 patch_prober.go:28] interesting pod/console-operator-58897d9998-twxgh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.808377 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-twxgh" podUID="47b7dc89-8538-41f1-b569-a2b6dcbf8f13" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.811261 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" event={"ID":"af89e320-2661-4860-8079-0c1ff810d97a","Type":"ContainerStarted","Data":"0dec7d93893fdb279666f064114255cb117415740e10ead52a54afd0bc425909"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.811327 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" event={"ID":"af89e320-2661-4860-8079-0c1ff810d97a","Type":"ContainerStarted","Data":"dbaf694982512e24ab349494907703f0de521dff3496fd8613b1b7de21123d57"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.868896 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.870471 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.370447695 +0000 UTC m=+145.403770264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.880654 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" podStartSLOduration=123.880638834 podStartE2EDuration="2m3.880638834s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.88050483 +0000 UTC m=+144.913827399" watchObservedRunningTime="2026-02-19 05:27:28.880638834 +0000 UTC m=+144.913961403" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.970576 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.973417 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.473402204 +0000 UTC m=+145.506724773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.007904 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" podStartSLOduration=123.007885228 podStartE2EDuration="2m3.007885228s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.9805538 +0000 UTC m=+145.013876369" watchObservedRunningTime="2026-02-19 05:27:29.007885228 +0000 UTC m=+145.041207787" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.043755 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" podStartSLOduration=123.04373618 podStartE2EDuration="2m3.04373618s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.015003993 +0000 UTC m=+145.048326562" watchObservedRunningTime="2026-02-19 05:27:29.04373618 +0000 UTC m=+145.077058749" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.072797 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.072900 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.572886228 +0000 UTC m=+145.606208797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.073082 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.073386 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.573378341 +0000 UTC m=+145.606700910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.076705 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" podStartSLOduration=123.076692762 podStartE2EDuration="2m3.076692762s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.044228163 +0000 UTC m=+145.077550732" watchObservedRunningTime="2026-02-19 05:27:29.076692762 +0000 UTC m=+145.110015331" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.078899 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-znn5k" podStartSLOduration=5.078892732 podStartE2EDuration="5.078892732s" podCreationTimestamp="2026-02-19 05:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.077742191 +0000 UTC m=+145.111064760" watchObservedRunningTime="2026-02-19 05:27:29.078892732 +0000 UTC m=+145.112215301" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.103965 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mlxbg" podStartSLOduration=124.103951138 podStartE2EDuration="2m4.103951138s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.102827307 +0000 UTC m=+145.136149876" watchObservedRunningTime="2026-02-19 05:27:29.103951138 +0000 UTC m=+145.137273707" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.175712 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.176360 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.67633481 +0000 UTC m=+145.709657379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.221200 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" podStartSLOduration=123.221183508 podStartE2EDuration="2m3.221183508s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.185598124 +0000 UTC m=+145.218920693" watchObservedRunningTime="2026-02-19 05:27:29.221183508 +0000 UTC m=+145.254506077" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.221335 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-twxgh" podStartSLOduration=124.221330482 podStartE2EDuration="2m4.221330482s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.216848209 +0000 UTC m=+145.250170778" watchObservedRunningTime="2026-02-19 05:27:29.221330482 +0000 UTC m=+145.254653041" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.272798 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xphkg" podStartSLOduration=123.272783091 podStartE2EDuration="2m3.272783091s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.245605807 +0000 UTC m=+145.278928376" watchObservedRunningTime="2026-02-19 05:27:29.272783091 +0000 UTC m=+145.306105660" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.283150 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.283435 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.783423672 +0000 UTC m=+145.816746241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.293326 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" podStartSLOduration=123.293294982 podStartE2EDuration="2m3.293294982s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.271364882 +0000 UTC m=+145.304687451" watchObservedRunningTime="2026-02-19 05:27:29.293294982 +0000 UTC m=+145.326617551" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.296897 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.308024 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" podStartSLOduration=123.308012095 podStartE2EDuration="2m3.308012095s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.306472423 +0000 UTC m=+145.339794992" watchObservedRunningTime="2026-02-19 05:27:29.308012095 +0000 UTC m=+145.341334664" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.344863 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.384817 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.385332 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.885286791 +0000 UTC m=+145.918609360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.470867 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.489137 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 
05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.489730 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.98971071 +0000 UTC m=+146.023033279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.509877 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5lz5f"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.529201 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"] Feb 19 05:27:29 crc kubenswrapper[5012]: W0219 05:27:29.548801 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2ef24f0_0d7d_4d25_a839_b650893a8332.slice/crio-4170f89dcf6400f27a8d30cf11a2759d84fa6b0636d9e001fc3de2c9e351fb8a WatchSource:0}: Error finding container 4170f89dcf6400f27a8d30cf11a2759d84fa6b0636d9e001fc3de2c9e351fb8a: Status 404 returned error can't find the container with id 4170f89dcf6400f27a8d30cf11a2759d84fa6b0636d9e001fc3de2c9e351fb8a Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.555216 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.593924 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.594513 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.094497279 +0000 UTC m=+146.127819848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.604132 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.652404 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.663084 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:29 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:29 crc 
kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:29 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.663140 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.701284 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.701672 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.201660303 +0000 UTC m=+146.234982872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.804847 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.805477 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.305461395 +0000 UTC m=+146.338783964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.830219 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tjxj6"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.850694 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" event={"ID":"a3d6e827-2fd3-4026-8bbb-b6336cf7c020","Type":"ContainerStarted","Data":"7bd06768a70620d61b4c8bd3cc981fddb220ce5161cbcb2b453a85acd432af62"} Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.861564 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" event={"ID":"e69d69b3-8e9f-4413-93c1-3c1f77388221","Type":"ContainerStarted","Data":"93fb9b589d6289ffd4851f1e88b8f5cfe19b0d2d25f63362c467d31a22adff2e"} Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.873044 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" event={"ID":"9d9907b5-e862-4242-b233-ed39e5de515a","Type":"ContainerStarted","Data":"291503e9c76ce615eb7999a82601dfb35c2e6137b50e2c0b8c3822c0aa06afcc"} Feb 19 05:27:29 crc kubenswrapper[5012]: W0219 05:27:29.873596 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4edd2db_a884_46ac_9a12_0cd2a5daaeb5.slice/crio-f74174bc886f32fbb6835d34a2a8e317e36dd1c54827c3053e9a817ce011c1aa WatchSource:0}: Error finding container 
f74174bc886f32fbb6835d34a2a8e317e36dd1c54827c3053e9a817ce011c1aa: Status 404 returned error can't find the container with id f74174bc886f32fbb6835d34a2a8e317e36dd1c54827c3053e9a817ce011c1aa Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.891961 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" event={"ID":"6383e6d2-7e9e-4927-a55a-f574e48d316d","Type":"ContainerStarted","Data":"b5ba991ec28c36a1b0da26f02b14fe82ecdacedcdc13253933d0852244f806d5"} Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.894996 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" event={"ID":"c2ef24f0-0d7d-4d25-a839-b650893a8332","Type":"ContainerStarted","Data":"4170f89dcf6400f27a8d30cf11a2759d84fa6b0636d9e001fc3de2c9e351fb8a"} Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.906563 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.906960 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.406944643 +0000 UTC m=+146.440267212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.917673 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" event={"ID":"746299dc-637f-42a3-ad0d-0de202bae64e","Type":"ContainerStarted","Data":"714988db22596c1d65aaf308c4916997186183702fc2ddf6a89dc3763690e18d"} Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.942657 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" event={"ID":"053058a2-c542-41f4-b393-1be45501cfa9","Type":"ContainerStarted","Data":"fa048d2900fde8ad6ca662cd4311116c9cec4123b78efbc7808031f85e351485"} Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.943084 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" event={"ID":"053058a2-c542-41f4-b393-1be45501cfa9","Type":"ContainerStarted","Data":"247fdcb9e7d20812e4c8624cd9b32ab824c3b7117663dfe72f010e8d9a6c1a4e"} Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.969724 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" podStartSLOduration=124.956289364 podStartE2EDuration="2m4.956289364s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 05:27:29.954102925 +0000 UTC m=+145.987425484" watchObservedRunningTime="2026-02-19 05:27:29.956289364 +0000 UTC m=+145.989611933" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.982810 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" event={"ID":"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88","Type":"ContainerStarted","Data":"233d67638e79328975d3351376616c9acd42140bec2ca07eea26d5f35609f2f4"} Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.984069 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" podStartSLOduration=123.984044214 podStartE2EDuration="2m3.984044214s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.984020264 +0000 UTC m=+146.017342833" watchObservedRunningTime="2026-02-19 05:27:29.984044214 +0000 UTC m=+146.017366773" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.006398 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.010099 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.011407 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 05:27:30.511376983 +0000 UTC m=+146.544699552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.033843 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gv2pd"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.042436 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" podStartSLOduration=124.042415683 podStartE2EDuration="2m4.042415683s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:30.033748535 +0000 UTC m=+146.067071104" watchObservedRunningTime="2026-02-19 05:27:30.042415683 +0000 UTC m=+146.075738252" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.043273 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.089155 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.099182 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.106122 5012 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8t75"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.112555 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.119970 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.619955225 +0000 UTC m=+146.653277794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.122253 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.129437 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.138350 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7q9qf"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.148922 5012 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"] Feb 19 05:27:30 crc kubenswrapper[5012]: W0219 05:27:30.150556 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdad60bd_8af5_439a_a62e_edf676281c47.slice/crio-238a1a6613944c23bcbc5511f8351d95f7ddcdcecbc40cdcd70078a275508a6b WatchSource:0}: Error finding container 238a1a6613944c23bcbc5511f8351d95f7ddcdcecbc40cdcd70078a275508a6b: Status 404 returned error can't find the container with id 238a1a6613944c23bcbc5511f8351d95f7ddcdcecbc40cdcd70078a275508a6b Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.204775 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.207275 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.218149 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.218655 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.718635177 +0000 UTC m=+146.751957746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.221660 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.223473 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.265893 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2l69"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.322030 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.334159 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.834120919 +0000 UTC m=+146.867443488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.371075 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gwx52"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.422761 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.431423 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.431475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.437675 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"] Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.437723 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.445504 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.945471728 +0000 UTC m=+146.978794297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.546847 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.547378 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.047359987 +0000 UTC m=+147.080682556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.582495 5012 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hjmb9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 05:27:30 crc kubenswrapper[5012]: [+]log ok Feb 19 05:27:30 crc kubenswrapper[5012]: [+]etcd ok Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/max-in-flight-filter ok Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 05:27:30 crc kubenswrapper[5012]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 05:27:30 crc kubenswrapper[5012]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/project.openshift.io-projectcache ok Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 05:27:30 crc kubenswrapper[5012]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 
05:27:30 crc kubenswrapper[5012]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 05:27:30 crc kubenswrapper[5012]: livez check failed Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.582549 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" podUID="4888722d-d5dd-4748-ac7b-a1d11ba08e6e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.648171 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.649038 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.149011711 +0000 UTC m=+147.182334280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.649599 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.650368 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.150294326 +0000 UTC m=+147.183616925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.672610 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:30 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:30 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:30 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.672661 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.750772 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.751311 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 05:27:31.251278121 +0000 UTC m=+147.284600690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.852416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.853117 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.353090838 +0000 UTC m=+147.386413407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.953378 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.953505 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.453476896 +0000 UTC m=+147.486799465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.954075 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.954472 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.454457162 +0000 UTC m=+147.487779731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.985353 5012 patch_prober.go:28] interesting pod/console-operator-58897d9998-twxgh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.985428 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-twxgh" podUID="47b7dc89-8538-41f1-b569-a2b6dcbf8f13" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.005552 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" event={"ID":"6383e6d2-7e9e-4927-a55a-f574e48d316d","Type":"ContainerStarted","Data":"934c363dcd8da69c6385acb864b1ee90cc3cf64cd81262fdf49825772174c8f0"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.008516 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" event={"ID":"a3d6e827-2fd3-4026-8bbb-b6336cf7c020","Type":"ContainerStarted","Data":"dd0768f51b314debb4667760108777dd51c2e98364a09dac6417b1954e7afb69"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 
05:27:31.046768 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" podStartSLOduration=125.046738949 podStartE2EDuration="2m5.046738949s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.034419842 +0000 UTC m=+147.067742411" watchObservedRunningTime="2026-02-19 05:27:31.046738949 +0000 UTC m=+147.080061518" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.056989 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.057520 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.557488733 +0000 UTC m=+147.590811302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.075207 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" event={"ID":"e69d69b3-8e9f-4413-93c1-3c1f77388221","Type":"ContainerStarted","Data":"568922a2fc5b3da6777ed652109652603b30e55c13b96602a9bba6fd75817c67"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.106865 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" event={"ID":"bdad60bd-8af5-439a-a62e-edf676281c47","Type":"ContainerStarted","Data":"f72550df5ab68a936ecdc6b080e0f399a05b553de8ce12467f05fe786041c8cc"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.106926 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" event={"ID":"bdad60bd-8af5-439a-a62e-edf676281c47","Type":"ContainerStarted","Data":"238a1a6613944c23bcbc5511f8351d95f7ddcdcecbc40cdcd70078a275508a6b"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.145968 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" event={"ID":"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205","Type":"ContainerStarted","Data":"140000039b488d6a74e6dca0a622136e9f5d95d4316284ef411cd86f7b4b5bdc"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.146010 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" 
event={"ID":"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205","Type":"ContainerStarted","Data":"55e0d410865219aa80bb254f3000f0c06084b6058c11a4d834d22107a349d1de"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.159525 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.159838 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.659827325 +0000 UTC m=+147.693149894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.165678 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" podStartSLOduration=125.165660605 podStartE2EDuration="2m5.165660605s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.164554545 +0000 UTC m=+147.197877114" watchObservedRunningTime="2026-02-19 05:27:31.165660605 +0000 UTC 
m=+147.198983174" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.175368 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tjxj6" event={"ID":"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5","Type":"ContainerStarted","Data":"31179a8dde6740a2622f1382e4cfb69846d1c6177a3354d5743bb90e841822f9"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.175411 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tjxj6" event={"ID":"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5","Type":"ContainerStarted","Data":"f74174bc886f32fbb6835d34a2a8e317e36dd1c54827c3053e9a817ce011c1aa"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.176045 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.186700 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" event={"ID":"73661022-0008-4452-b140-f0a75e4c40c7","Type":"ContainerStarted","Data":"596b1e354506a9e5a1e64ace0f45a0811e985252dae4a138f059023443d63e80"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.188037 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.188074 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjxj6" podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.198898 5012 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" podStartSLOduration=125.198882765 podStartE2EDuration="2m5.198882765s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.194383271 +0000 UTC m=+147.227705840" watchObservedRunningTime="2026-02-19 05:27:31.198882765 +0000 UTC m=+147.232205334" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.226913 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tjxj6" podStartSLOduration=125.226895012 podStartE2EDuration="2m5.226895012s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.226710627 +0000 UTC m=+147.260033196" watchObservedRunningTime="2026-02-19 05:27:31.226895012 +0000 UTC m=+147.260217571" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.260093 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.261146 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.761127889 +0000 UTC m=+147.794450458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.262942 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" event={"ID":"9102ddf1-e140-48e7-9ecd-14a4c007f5d5","Type":"ContainerStarted","Data":"e41ab33fc5b8652a7a2c955bb031c0953133b4a1470c9124e9653b6dbd68bbc9"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.274331 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"6bc39bd96b4c359628dad75a9d900061768b52ad0369c3f6ed3a120400ee5c52"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.357638 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" event={"ID":"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88","Type":"ContainerStarted","Data":"91ae4aa2e6aa7b6ec717573f0e3eaf4b00be341eb7b57b7438299cd791cd3906"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.358892 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.361611 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.361955 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.861944979 +0000 UTC m=+147.895267548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.364969 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" event={"ID":"ab107439-3fd5-41e7-9d30-71962fc96028","Type":"ContainerStarted","Data":"da402da67ca4c5fc52c3d598b03036feb962066b3ef43bac03f54bd427d48c4b"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.365034 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" event={"ID":"ab107439-3fd5-41e7-9d30-71962fc96028","Type":"ContainerStarted","Data":"d0f9af79507258c7fb52f8917ea0f2b463469fd198535aa579fda9f6d003c604"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.369222 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7q9qf" event={"ID":"f5a5c8b4-57c3-43fc-a404-2754d0e70c50","Type":"ContainerStarted","Data":"00493f76ed99bc7d84dfd7a4293c4e667388f5f521879ae230494e707881d54b"} Feb 19 05:27:31 crc 
kubenswrapper[5012]: I0219 05:27:31.373571 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" event={"ID":"4f848507-d616-4d06-885f-d84210d9b4a0","Type":"ContainerStarted","Data":"4b19c01b7e4246e19fcfb5539ff1f59ef0ecd09dfb7d5005b2f7aee4820bffde"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.378101 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2l69" event={"ID":"8a05e6ff-179f-4a04-9fc2-524e31980467","Type":"ContainerStarted","Data":"e12e02fdf3553333025d40efc1ffd6c4531ba3e283e1523b43458da16a159c64"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.381762 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" event={"ID":"ce585ab5-2554-4d20-8789-cf5bfa8e45a7","Type":"ContainerStarted","Data":"bdf60105a735686277da3c5b1467ac389878a76a65699e5227c68bdc76452b4e"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.388183 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" event={"ID":"46582f7f-c6b0-4ae3-9103-4a4754304438","Type":"ContainerStarted","Data":"9d6ad88222eb3dc7a89c4d09501dab7f14514064a3aea52304068199c3bce69f"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.390862 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" event={"ID":"562c18aa-5aed-4f1e-95f5-da1fe7c02523","Type":"ContainerStarted","Data":"48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.390893 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" event={"ID":"562c18aa-5aed-4f1e-95f5-da1fe7c02523","Type":"ContainerStarted","Data":"a4304d16005995731fefdc081d0677adb43c535c36d93bdb10216b67e4aa8631"} Feb 19 05:27:31 crc 
kubenswrapper[5012]: I0219 05:27:31.391950 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.400248 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" event={"ID":"1fe71123-0d33-41fa-b582-02d70177d0f0","Type":"ContainerStarted","Data":"2652496af01f16e7d01dd4644a2db7919909ac2c6ee7aabe5a04080706a0bb7d"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.401227 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.401862 5012 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwd8z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.401897 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.402979 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7q9qf" podStartSLOduration=8.402960702 podStartE2EDuration="8.402960702s" podCreationTimestamp="2026-02-19 05:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.398276314 +0000 UTC m=+147.431598883" 
watchObservedRunningTime="2026-02-19 05:27:31.402960702 +0000 UTC m=+147.436283271" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.403647 5012 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x52wm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.403669 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" podUID="1fe71123-0d33-41fa-b582-02d70177d0f0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.419190 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" event={"ID":"c2ef24f0-0d7d-4d25-a839-b650893a8332","Type":"ContainerStarted","Data":"9dadbd16fc4257a74b3c758757f2183dec93fc827f2c3a52bc8ea622a33b7e8d"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.437163 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" event={"ID":"4087f246-2160-469e-8ad1-d88c147ff7c0","Type":"ContainerStarted","Data":"3e572b92845b1150adc255bd1a8efbf36b815ce8a7c965027181e1203955ca74"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.437207 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" event={"ID":"4087f246-2160-469e-8ad1-d88c147ff7c0","Type":"ContainerStarted","Data":"24d5a2afa116a58db7c6f6fc860e0f8debb92b44ff9a5a4acb4d222b5b89979a"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.442748 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="9d9907b5-e862-4242-b233-ed39e5de515a" containerID="12562e089e25fe2983a61aba7d6057742e9f0092f47825604a4c818e9f02b0c9" exitCode=0 Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.442841 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" event={"ID":"9d9907b5-e862-4242-b233-ed39e5de515a","Type":"ContainerDied","Data":"12562e089e25fe2983a61aba7d6057742e9f0092f47825604a4c818e9f02b0c9"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.459337 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" podStartSLOduration=125.459322765 podStartE2EDuration="2m5.459322765s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.436531571 +0000 UTC m=+147.469854150" watchObservedRunningTime="2026-02-19 05:27:31.459322765 +0000 UTC m=+147.492645334" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.464709 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" event={"ID":"2876c7dd-5979-49eb-ab61-8ffce07376b2","Type":"ContainerStarted","Data":"f00efad93e397330dee961499b255aaf0237f272596ef2b7bd8e55ea2bcb1386"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.464754 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.465771 5012 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jv9qx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 19 05:27:31 crc 
kubenswrapper[5012]: I0219 05:27:31.465798 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" podUID="2876c7dd-5979-49eb-ab61-8ffce07376b2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.468831 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.469510 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.969496444 +0000 UTC m=+148.002819013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.480110 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" podStartSLOduration=125.480089934 podStartE2EDuration="2m5.480089934s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.459660874 +0000 UTC m=+147.492983443" watchObservedRunningTime="2026-02-19 05:27:31.480089934 +0000 UTC m=+147.513412503" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.483488 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.517920 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" podStartSLOduration=125.517873078 podStartE2EDuration="2m5.517873078s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.482320965 +0000 UTC m=+147.515643534" watchObservedRunningTime="2026-02-19 05:27:31.517873078 +0000 UTC m=+147.551195647" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.555013 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" podStartSLOduration=125.554978724 podStartE2EDuration="2m5.554978724s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.542430001 +0000 UTC m=+147.575752560" watchObservedRunningTime="2026-02-19 05:27:31.554978724 +0000 UTC m=+147.588301293" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.601755 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.617832 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.117808684 +0000 UTC m=+148.151131253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.667584 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:31 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:31 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:31 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.667623 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.710631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.711013 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 05:27:32.210999996 +0000 UTC m=+148.244322565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.816119 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.816820 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.316805553 +0000 UTC m=+148.350128122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.917806 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.918120 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.418106566 +0000 UTC m=+148.451429135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.019186 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.019714 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.519684907 +0000 UTC m=+148.553007476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.121207 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.121435 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.621407192 +0000 UTC m=+148.654729761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.121601 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.121917 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.621904586 +0000 UTC m=+148.655227145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.222768 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.223013 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.722981093 +0000 UTC m=+148.756303662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.223093 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.223437 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.723421796 +0000 UTC m=+148.756744365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.324534 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.325451 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.825436239 +0000 UTC m=+148.858758808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.359614 5012 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-t22fw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": context deadline exceeded" start-of-body= Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.359672 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" podUID="ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": context deadline exceeded" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.426499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.426902 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.926885146 +0000 UTC m=+148.960207715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.466323 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" event={"ID":"4f848507-d616-4d06-885f-d84210d9b4a0","Type":"ContainerStarted","Data":"a5392e720da0426e7ceef10d0cda2ed58aeb2d4a566e42290efa4b7559b16c98"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.467877 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" event={"ID":"ce585ab5-2554-4d20-8789-cf5bfa8e45a7","Type":"ContainerStarted","Data":"2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.468099 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.469375 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"f991c6291fe14dcb45ceeb0ac927a3e121130030e26a5474b618370fe6e8d6e7"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.469679 5012 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6mmvm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Feb 19 
05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.469728 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.470792 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" event={"ID":"1fe71123-0d33-41fa-b582-02d70177d0f0","Type":"ContainerStarted","Data":"f925c3970014c9c772e649ce17e14f8da66967853c70803bbd3a58c2cac82bfe"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.472822 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7q9qf" event={"ID":"f5a5c8b4-57c3-43fc-a404-2754d0e70c50","Type":"ContainerStarted","Data":"1a6cf9619c764cefc3b20be635a3b7a82399b2d323243e4f7a65726c22488bbe"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.474138 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" event={"ID":"2876c7dd-5979-49eb-ab61-8ffce07376b2","Type":"ContainerStarted","Data":"6436bc748624620ac0c420a56474e96725fe68bc22d131f413a9bd0bee35ce28"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.476249 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" event={"ID":"73661022-0008-4452-b140-f0a75e4c40c7","Type":"ContainerStarted","Data":"f89345770d722ee3c2c4b2cd055939acb47f86448dad7a7d606bcb930764d614"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.476283 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" 
event={"ID":"73661022-0008-4452-b140-f0a75e4c40c7","Type":"ContainerStarted","Data":"f077b5534c78cf0401c76e13b78eb731161a85181d9a3a47fd5c9ae9fbbf9043"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.478246 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" event={"ID":"ab107439-3fd5-41e7-9d30-71962fc96028","Type":"ContainerStarted","Data":"c530ea4815d1d6599f495fc12cad697deeb748084c8735ecc301a973e1d0c08e"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.480056 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2l69" event={"ID":"8a05e6ff-179f-4a04-9fc2-524e31980467","Type":"ContainerStarted","Data":"dc620a880d94d42fb50afd48bb887571bd10dd03b87bb53c851ffd0920ae97ca"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.480086 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2l69" event={"ID":"8a05e6ff-179f-4a04-9fc2-524e31980467","Type":"ContainerStarted","Data":"e28687a9a53aa4c5c8e2212e5e4708b683f2f047dc1b968f77ee7fa7fff09c4c"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.480447 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.481636 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" event={"ID":"4087f246-2160-469e-8ad1-d88c147ff7c0","Type":"ContainerStarted","Data":"ef1e6cc79735672a629e72ac636446ea1c3d193a856d25644eeded63d0d801f5"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.483094 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" event={"ID":"a3d6e827-2fd3-4026-8bbb-b6336cf7c020","Type":"ContainerStarted","Data":"11ec2736e461ae7f894ad44d757cd4b1b03a577e15a93de342b21849df8ef89e"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 
05:27:32.485060 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" event={"ID":"9d9907b5-e862-4242-b233-ed39e5de515a","Type":"ContainerStarted","Data":"ea1704874446ac427436ac2a83ffb54965d98ad3e0c5a5a91c666ddb9f68fff5"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.485087 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.487550 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" event={"ID":"46582f7f-c6b0-4ae3-9103-4a4754304438","Type":"ContainerStarted","Data":"6ecd18e5cbbb471f815af478d67f7066d4c1bb34788cd0f8db72ff1fe8b502b7"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.488808 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" event={"ID":"9102ddf1-e140-48e7-9ecd-14a4c007f5d5","Type":"ContainerStarted","Data":"cb27ffc8bea8d2f4936d5055096df47700e058e55b5c2982be2365f15b2c4e55"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.490835 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" event={"ID":"e69d69b3-8e9f-4413-93c1-3c1f77388221","Type":"ContainerStarted","Data":"1fda32d91de928b7ab6b4a50e19eb50b8b6a0c562b3d63ffd0717378c3f931b7"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.490862 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.492209 5012 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwd8z container/marketplace-operator namespace/openshift-marketplace: Readiness probe 
status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.492245 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.492737 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.492761 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjxj6" podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.497500 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" podStartSLOduration=126.497480319 podStartE2EDuration="2m6.497480319s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.496825111 +0000 UTC m=+148.530147680" watchObservedRunningTime="2026-02-19 05:27:32.497480319 +0000 UTC m=+148.530802888" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.497790 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.508583 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.527464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.528979 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.028964311 +0000 UTC m=+149.062286880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.549137 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x2l69" podStartSLOduration=9.549120813 podStartE2EDuration="9.549120813s" podCreationTimestamp="2026-02-19 05:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.541920766 +0000 UTC m=+148.575243325" watchObservedRunningTime="2026-02-19 05:27:32.549120813 +0000 UTC m=+148.582443382" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.554485 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.565187 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" podStartSLOduration=126.565165642 podStartE2EDuration="2m6.565165642s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.564934976 +0000 UTC m=+148.598257535" watchObservedRunningTime="2026-02-19 05:27:32.565165642 +0000 UTC m=+148.598488211" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.583905 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" podStartSLOduration=126.583891325 podStartE2EDuration="2m6.583891325s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.580878912 +0000 UTC m=+148.614201481" watchObservedRunningTime="2026-02-19 05:27:32.583891325 +0000 UTC m=+148.617213894" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.584974 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.605272 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" podStartSLOduration=126.60525549 podStartE2EDuration="2m6.60525549s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.603131352 +0000 UTC m=+148.636453921" watchObservedRunningTime="2026-02-19 05:27:32.60525549 +0000 UTC m=+148.638578059" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.629239 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.630213 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.630805 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.130772268 +0000 UTC m=+149.164094837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.633032 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" podStartSLOduration=127.6330211 podStartE2EDuration="2m7.6330211s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.630701686 +0000 UTC m=+148.664024255" watchObservedRunningTime="2026-02-19 05:27:32.6330211 +0000 UTC m=+148.666343669" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.634117 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:32 crc 
kubenswrapper[5012]: I0219 05:27:32.657643 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" podStartSLOduration=126.657624504 podStartE2EDuration="2m6.657624504s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.652235966 +0000 UTC m=+148.685558535" watchObservedRunningTime="2026-02-19 05:27:32.657624504 +0000 UTC m=+148.690947073" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.658282 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:32 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:32 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:32 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.658404 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.701498 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" podStartSLOduration=127.701478804 podStartE2EDuration="2m7.701478804s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.700706133 +0000 UTC m=+148.734028702" watchObservedRunningTime="2026-02-19 05:27:32.701478804 +0000 UTC 
m=+148.734801373" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.734058 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.734565 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.734607 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.734672 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.234638672 +0000 UTC m=+149.267961241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.734702 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.752108 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.752200 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.767970 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.770904 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" podStartSLOduration=126.770881474 podStartE2EDuration="2m6.770881474s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.768271123 +0000 UTC m=+148.801593692" watchObservedRunningTime="2026-02-19 05:27:32.770881474 +0000 UTC m=+148.804204043" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.836889 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.837221 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.337210831 +0000 UTC m=+149.370533400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.861532 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" podStartSLOduration=126.861509246 podStartE2EDuration="2m6.861509246s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.816943636 +0000 UTC m=+148.850266205" watchObservedRunningTime="2026-02-19 05:27:32.861509246 +0000 UTC m=+148.894831815" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.923175 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" podStartSLOduration=126.923161384 podStartE2EDuration="2m6.923161384s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.859880651 +0000 UTC m=+148.893203220" watchObservedRunningTime="2026-02-19 05:27:32.923161384 +0000 UTC m=+148.956483953" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.937855 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.938237 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.438220836 +0000 UTC m=+149.471543405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.018983 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.035960 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.039368 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.039912 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.53989421 +0000 UTC m=+149.573216769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.045668 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.140226 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.140914 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.640899335 +0000 UTC m=+149.674221904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.242329 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.242618 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.74260484 +0000 UTC m=+149.775927409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.343380 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.343974 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.843950704 +0000 UTC m=+149.877273273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.444982 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.445616 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.945603278 +0000 UTC m=+149.978925847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.511409 5012 generic.go:334] "Generic (PLEG): container finished" podID="46582f7f-c6b0-4ae3-9103-4a4754304438" containerID="6ecd18e5cbbb471f815af478d67f7066d4c1bb34788cd0f8db72ff1fe8b502b7" exitCode=0 Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.511487 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" event={"ID":"46582f7f-c6b0-4ae3-9103-4a4754304438","Type":"ContainerDied","Data":"6ecd18e5cbbb471f815af478d67f7066d4c1bb34788cd0f8db72ff1fe8b502b7"} Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.515524 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"74486fcf2869c2781e719093ac09c51de06698182717f4dc05c4a67d46096f0a"} Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.520592 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.520647 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjxj6" podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.546347 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.546635 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.046612083 +0000 UTC m=+150.079934652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.546946 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.549160 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.049146923 +0000 UTC m=+150.082469492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.617390 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.648647 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.651424 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.151395622 +0000 UTC m=+150.184718191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.668746 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.680195 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:33 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:33 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:33 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.680244 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:33 crc kubenswrapper[5012]: W0219 05:27:33.719148 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5fc5833a15ed29eda06a6213ccb4c8d756da271f16af8ca8f0ecd237c9dd4380 WatchSource:0}: Error finding container 5fc5833a15ed29eda06a6213ccb4c8d756da271f16af8ca8f0ecd237c9dd4380: Status 404 returned error can't find the container with id 
5fc5833a15ed29eda06a6213ccb4c8d756da271f16af8ca8f0ecd237c9dd4380 Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.737090 5012 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.751546 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.751905 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.251892994 +0000 UTC m=+150.285215563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.854546 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.854686 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.354663157 +0000 UTC m=+150.387985726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.857487 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.857778 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.357767022 +0000 UTC m=+150.391089591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.911798 5012 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T05:27:33.737114199Z","Handler":null,"Name":""} Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.916230 5012 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.916279 5012 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.958493 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: W0219 05:27:33.971474 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-9cba4c391c4a9d0349e171123ff6552eb4c54cd23b63e702d08938644e9f84c8 WatchSource:0}: Error finding container 
9cba4c391c4a9d0349e171123ff6552eb4c54cd23b63e702d08938644e9f84c8: Status 404 returned error can't find the container with id 9cba4c391c4a9d0349e171123ff6552eb4c54cd23b63e702d08938644e9f84c8 Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.982067 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.032378 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.033590 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.036142 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.044844 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.060542 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.061076 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.061114 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.061145 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.065694 5012 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.065740 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.088008 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.161821 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.161979 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.162063 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.162260 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.162329 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.183409 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.215417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xrjxk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.216290 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.218048 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.224113 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.263058 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.263120 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.263144 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.356327 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364260 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364707 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364762 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " 
pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.383872 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.385151 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.419535 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.420438 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.435524 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.467425 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.467491 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " 
pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.467555 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.528860 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.531332 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"b1368d1f90632c6d4bbd85063daf2f132f53457ed9152451edf650ce584a60fd"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.531364 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"5127df93edafa6ae4d58b651e2b89951c7923dd1043c70e657e4434b71b8ad5c"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.536234 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ffad8409b5f4d0899c2106106ac6efea5f1f05b5a28f347c9e9f64b7d2f2fac3"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.536278 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5fc5833a15ed29eda06a6213ccb4c8d756da271f16af8ca8f0ecd237c9dd4380"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.540589 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d136e23e49a6d5e9dcc1ae868f447b7d3648b735d3c6411e7ab87329a144b6f1"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.540632 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9cba4c391c4a9d0349e171123ff6552eb4c54cd23b63e702d08938644e9f84c8"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.541188 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.549967 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"45ee95c9692a3773b380da397110c1fb5c682017c21c1f94d9d0e2e49866301d"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.550004 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a14063f7e3c2fbcd2e9f3960b6f031e379fef81577c20ed6fa027502cbf975a1"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.552066 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" podStartSLOduration=11.55205007 podStartE2EDuration="11.55205007s" podCreationTimestamp="2026-02-19 05:27:23 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:34.550674042 +0000 UTC m=+150.583996611" watchObservedRunningTime="2026-02-19 05:27:34.55205007 +0000 UTC m=+150.585372639" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.569215 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.569329 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.569539 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.572516 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.576942 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.602905 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.651375 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.653762 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.658667 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:34 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:34 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:34 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.658766 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.672282 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.672785 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.672827 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.672894 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzlc7\" (UniqueName: \"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.730537 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.736913 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.774084 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzlc7\" (UniqueName: 
\"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.774148 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.774191 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.774568 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.775022 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.778946 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.794395 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.800909 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzlc7\" (UniqueName: \"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: W0219 05:27:34.816638 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7ce4c2b_d3b7_4881_91fe_49f7103f12b9.slice/crio-b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1 WatchSource:0}: Error finding container b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1: Status 404 returned error can't find the container with id b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1 Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.854917 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.861533 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.875719 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") pod \"46582f7f-c6b0-4ae3-9103-4a4754304438\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.875813 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") pod \"46582f7f-c6b0-4ae3-9103-4a4754304438\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.875881 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") pod \"46582f7f-c6b0-4ae3-9103-4a4754304438\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.876893 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume" (OuterVolumeSpecName: "config-volume") pod "46582f7f-c6b0-4ae3-9103-4a4754304438" (UID: "46582f7f-c6b0-4ae3-9103-4a4754304438"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.882320 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9" (OuterVolumeSpecName: "kube-api-access-c24k9") pod "46582f7f-c6b0-4ae3-9103-4a4754304438" (UID: "46582f7f-c6b0-4ae3-9103-4a4754304438"). InnerVolumeSpecName "kube-api-access-c24k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.891579 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46582f7f-c6b0-4ae3-9103-4a4754304438" (UID: "46582f7f-c6b0-4ae3-9103-4a4754304438"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:27:34 crc kubenswrapper[5012]: W0219 05:27:34.915172 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b9a1165_24e0_4062_b805_0f8262822507.slice/crio-1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f WatchSource:0}: Error finding container 1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f: Status 404 returned error can't find the container with id 1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.979685 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.980067 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") on node 
\"crc\" DevicePath \"\"" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.980077 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.980707 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.040350 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:27:35 crc kubenswrapper[5012]: W0219 05:27:35.053633 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6173dc70_80d4_4f9f_9129_898b2dc38692.slice/crio-51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4 WatchSource:0}: Error finding container 51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4: Status 404 returned error can't find the container with id 51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4 Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.080005 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: E0219 05:27:35.080211 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46582f7f-c6b0-4ae3-9103-4a4754304438" containerName="collect-profiles" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.080227 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="46582f7f-c6b0-4ae3-9103-4a4754304438" containerName="collect-profiles" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.080354 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="46582f7f-c6b0-4ae3-9103-4a4754304438" containerName="collect-profiles" Feb 19 
05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.080745 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.084432 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.084614 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.086976 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.184164 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.184250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.232980 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.287255 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.287472 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.287536 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.329094 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.405713 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.411646 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.548560 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.558728 5012 generic.go:334] "Generic (PLEG): container finished" podID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerID="06936ac625543a23fe6a94c680d400a453b3652063590fadf1140acbd164e331" exitCode=0 Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.558948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerDied","Data":"06936ac625543a23fe6a94c680d400a453b3652063590fadf1140acbd164e331"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.558995 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerStarted","Data":"b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.562013 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerStarted","Data":"be080096b804213f30565dd54118337146dcc411c16ff0c8a6962f9fd3f03e3a"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.563881 5012 generic.go:334] "Generic (PLEG): container finished" podID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerID="0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc" exitCode=0 Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.563993 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerDied","Data":"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.564061 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerStarted","Data":"51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.567360 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" event={"ID":"46582f7f-c6b0-4ae3-9103-4a4754304438","Type":"ContainerDied","Data":"9d6ad88222eb3dc7a89c4d09501dab7f14514064a3aea52304068199c3bce69f"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.567464 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d6ad88222eb3dc7a89c4d09501dab7f14514064a3aea52304068199c3bce69f" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.567557 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.569894 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.573758 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" event={"ID":"70e7a5c6-0abf-4c78-8087-958a19264b49","Type":"ContainerStarted","Data":"83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.573810 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" event={"ID":"70e7a5c6-0abf-4c78-8087-958a19264b49","Type":"ContainerStarted","Data":"b4a4a4ebd6fc7c45c5fc88ca24394f42a5591b27d7679378f83e52a1da7bb083"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.574649 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.585584 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b9a1165-24e0-4062-b805-0f8262822507" containerID="0fe51da344cbaacf6697c74dcff49e7182b9df6468c8ccbfb60f3cd9e38eda3d" exitCode=0 Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.586224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerDied","Data":"0fe51da344cbaacf6697c74dcff49e7182b9df6468c8ccbfb60f3cd9e38eda3d"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.586350 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerStarted","Data":"1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.606607 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" podStartSLOduration=129.606585752 podStartE2EDuration="2m9.606585752s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:35.606366086 +0000 UTC m=+151.639688655" watchObservedRunningTime="2026-02-19 05:27:35.606585752 +0000 UTC m=+151.639908321" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.654830 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:35 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:35 crc kubenswrapper[5012]: 
[+]process-running ok Feb 19 05:27:35 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.654939 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.876367 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.925510 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.973045 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.974481 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.977804 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.979945 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.982608 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.000790 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.000879 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.025001 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"] Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.026390 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.028095 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.036602 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"] Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102506 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102534 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102555 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " 
pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102580 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102625 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.121810 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.207965 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.208397 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " 
pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.208518 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.209011 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.209287 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.228913 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.291270 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.335183 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.335244 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.338573 5012 patch_prober.go:28] interesting pod/console-f9d7485db-mlxbg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.338622 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mlxbg" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.341892 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.420210 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.421623 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.423751 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.516968 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.517427 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.517459 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.566121 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 05:27:36 crc kubenswrapper[5012]: W0219 05:27:36.609099 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod13070b31_1da3_4cbd_8281_072d0ab1a3dd.slice/crio-87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159 WatchSource:0}: Error finding container 
87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159: Status 404 returned error can't find the container with id 87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159 Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.619997 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.620046 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.620105 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.620483 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.620770 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.622124 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.642531 5012 generic.go:334] "Generic (PLEG): container finished" podID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerID="e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56" exitCode=0 Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.642616 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerDied","Data":"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56"} Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.657639 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.649518 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6954f621-15eb-4515-8855-5bf05a7119c5","Type":"ContainerStarted","Data":"8acea0f167402859981dc839cf3505db2ef197f0b780f6627e5c2b682ff17782"} Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.659665 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"6954f621-15eb-4515-8855-5bf05a7119c5","Type":"ContainerStarted","Data":"1ccd16d1972d25b1763f5be3a7ec9e3f85e4b8a335594d9e379233169b3aff08"} Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.659698 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.659767 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.664249 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.689699 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.689676306 podStartE2EDuration="1.689676306s" podCreationTimestamp="2026-02-19 05:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:36.687501097 +0000 UTC m=+152.720823666" watchObservedRunningTime="2026-02-19 05:27:36.689676306 +0000 UTC m=+152.722998875" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.776795 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.859811 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"] Feb 19 05:27:36 crc kubenswrapper[5012]: W0219 05:27:36.873467 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod185ea561_a45e_49e1_a46b_f9bf9f6d2527.slice/crio-d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4 WatchSource:0}: Error finding container d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4: Status 404 returned error can't find the container with id d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4 Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.938869 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.938918 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tjxj6" podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.939052 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.941384 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjxj6" 
podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.090329 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.224269 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.225263 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.231667 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.291265 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.338293 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.338343 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.338364 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.440890 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.440935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.440957 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.442058 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.442635 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.467129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.559193 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.625342 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.626284 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.644877 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.644920 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.644939 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.672477 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.680776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"13070b31-1da3-4cbd-8281-072d0ab1a3dd","Type":"ContainerStarted","Data":"c1adebc5f1403256c42763a66b6e381a177799c4b736890ae0e7d77d62d55407"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.680833 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"13070b31-1da3-4cbd-8281-072d0ab1a3dd","Type":"ContainerStarted","Data":"87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.690736 5012 generic.go:334] "Generic (PLEG): container finished" podID="6954f621-15eb-4515-8855-5bf05a7119c5" containerID="8acea0f167402859981dc839cf3505db2ef197f0b780f6627e5c2b682ff17782" exitCode=0 Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.690911 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6954f621-15eb-4515-8855-5bf05a7119c5","Type":"ContainerDied","Data":"8acea0f167402859981dc839cf3505db2ef197f0b780f6627e5c2b682ff17782"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.710647 5012 generic.go:334] "Generic (PLEG): container finished" podID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerID="dc1a50c23707e41d34121953c7a07c7a6d9a618fec62090df956fa84f7fc89cb" exitCode=0 Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.710830 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerDied","Data":"dc1a50c23707e41d34121953c7a07c7a6d9a618fec62090df956fa84f7fc89cb"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.710867 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerStarted","Data":"d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.712452 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.712430678 podStartE2EDuration="2.712430678s" podCreationTimestamp="2026-02-19 05:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:37.708664225 +0000 UTC m=+153.741986794" watchObservedRunningTime="2026-02-19 05:27:37.712430678 +0000 UTC m=+153.745753247" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.719376 5012 generic.go:334] "Generic (PLEG): container finished" podID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerID="425cdf1067d5b62c628d2a89d10d8e953e7e1cf4ee46294d6c0c129fa2655d83" exitCode=0 Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.719578 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerDied","Data":"425cdf1067d5b62c628d2a89d10d8e953e7e1cf4ee46294d6c0c129fa2655d83"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.719632 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerStarted","Data":"e1cb3c13c7905eb67fe5b6fee6bfb21b5e93340cd8fbda0eba5b4e60709ae667"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.746543 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.746594 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.746643 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.747711 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.747823 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.779337 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.980351 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.175385 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"] Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.347496 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:27:38 crc kubenswrapper[5012]: W0219 05:27:38.384772 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b13dfa9_14e1_4ad5_b6c6_f86486a73e9a.slice/crio-d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab WatchSource:0}: Error finding container d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab: Status 404 returned error can't find the container with id d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.732562 5012 generic.go:334] "Generic (PLEG): container finished" podID="e45c788c-c8a0-4563-8d05-71915e390342" containerID="4b13a012dcea4fefc2b4e7757fddd764d86d7e0aa4fa7cfad77d502f2efa1ea0" exitCode=0 Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.732639 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerDied","Data":"4b13a012dcea4fefc2b4e7757fddd764d86d7e0aa4fa7cfad77d502f2efa1ea0"} Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.732662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerStarted","Data":"330c4277adf991cb8d45015f1cf3ae0cb9906f5605d279d3a2745e3670726677"} Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.735729 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerID="beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d" exitCode=0 Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.735799 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerDied","Data":"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d"} Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.735835 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerStarted","Data":"d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab"} Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.744641 5012 generic.go:334] "Generic (PLEG): container finished" podID="13070b31-1da3-4cbd-8281-072d0ab1a3dd" containerID="c1adebc5f1403256c42763a66b6e381a177799c4b736890ae0e7d77d62d55407" exitCode=0 Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.744696 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"13070b31-1da3-4cbd-8281-072d0ab1a3dd","Type":"ContainerDied","Data":"c1adebc5f1403256c42763a66b6e381a177799c4b736890ae0e7d77d62d55407"} Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.017600 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.090834 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") pod \"6954f621-15eb-4515-8855-5bf05a7119c5\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.091022 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") pod \"6954f621-15eb-4515-8855-5bf05a7119c5\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.092949 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6954f621-15eb-4515-8855-5bf05a7119c5" (UID: "6954f621-15eb-4515-8855-5bf05a7119c5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.106737 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6954f621-15eb-4515-8855-5bf05a7119c5" (UID: "6954f621-15eb-4515-8855-5bf05a7119c5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.192924 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.192953 5012 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.766635 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6954f621-15eb-4515-8855-5bf05a7119c5","Type":"ContainerDied","Data":"1ccd16d1972d25b1763f5be3a7ec9e3f85e4b8a335594d9e379233169b3aff08"} Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.766677 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ccd16d1972d25b1763f5be3a7ec9e3f85e4b8a335594d9e379233169b3aff08" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.766770 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.091771 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.124610 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") pod \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.124675 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") pod \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.124741 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "13070b31-1da3-4cbd-8281-072d0ab1a3dd" (UID: "13070b31-1da3-4cbd-8281-072d0ab1a3dd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.124940 5012 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.144420 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "13070b31-1da3-4cbd-8281-072d0ab1a3dd" (UID: "13070b31-1da3-4cbd-8281-072d0ab1a3dd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.226344 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.785920 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"13070b31-1da3-4cbd-8281-072d0ab1a3dd","Type":"ContainerDied","Data":"87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159"} Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.786218 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.786190 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:42 crc kubenswrapper[5012]: I0219 05:27:42.106833 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:44 crc kubenswrapper[5012]: I0219 05:27:44.431043 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:27:44 crc kubenswrapper[5012]: I0219 05:27:44.432128 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:27:46 
crc kubenswrapper[5012]: I0219 05:27:46.359601 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:46 crc kubenswrapper[5012]: I0219 05:27:46.363680 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:46 crc kubenswrapper[5012]: I0219 05:27:46.953053 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:48 crc kubenswrapper[5012]: I0219 05:27:48.561826 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:48 crc kubenswrapper[5012]: I0219 05:27:48.575037 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:48 crc kubenswrapper[5012]: I0219 05:27:48.628214 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:54 crc kubenswrapper[5012]: I0219 05:27:54.398441 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:28:02 crc kubenswrapper[5012]: E0219 05:28:02.994874 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 05:28:02 crc kubenswrapper[5012]: E0219 05:28:02.995929 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plzvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil
,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4xvs8_openshift-marketplace(a7ce4c2b-d3b7-4881-91fe-49f7103f12b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:28:02 crc kubenswrapper[5012]: E0219 05:28:02.997142 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4xvs8" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" Feb 19 05:28:05 crc kubenswrapper[5012]: E0219 05:28:05.025347 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4xvs8" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" Feb 19 05:28:06 crc kubenswrapper[5012]: I0219 05:28:06.682236 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.363869 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.364999 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzlc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-br86z_openshift-marketplace(e16bf8e1-cd8b-48fc-9726-40c1b397a6bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.366203 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-br86z" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.425360 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.425535 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pr49f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rprhz_openshift-marketplace(e45c788c-c8a0-4563-8d05-71915e390342): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.426878 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rprhz" podUID="e45c788c-c8a0-4563-8d05-71915e390342" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.458636 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.459078 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlfxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-48wp9_openshift-marketplace(7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.460385 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-48wp9" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" Feb 19 05:28:09 crc 
kubenswrapper[5012]: I0219 05:28:09.823074 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q5cb2"] Feb 19 05:28:09 crc kubenswrapper[5012]: W0219 05:28:09.890621 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e231950_a365_4a82_9481_05fdac171449.slice/crio-b2e06b54c13f611969946e31516345aec736d4f562ad6bf9bfc68714c955cbbc WatchSource:0}: Error finding container b2e06b54c13f611969946e31516345aec736d4f562ad6bf9bfc68714c955cbbc: Status 404 returned error can't find the container with id b2e06b54c13f611969946e31516345aec736d4f562ad6bf9bfc68714c955cbbc Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.017787 5012 generic.go:334] "Generic (PLEG): container finished" podID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerID="87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4" exitCode=0 Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.017841 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerDied","Data":"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4"} Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.020076 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" event={"ID":"2e231950-a365-4a82-9481-05fdac171449","Type":"ContainerStarted","Data":"b2e06b54c13f611969946e31516345aec736d4f562ad6bf9bfc68714c955cbbc"} Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.024343 5012 generic.go:334] "Generic (PLEG): container finished" podID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerID="b38c4d760b78c9580d7920d8d103f03ae36a4fb22594d35317c5a0fc8161982d" exitCode=0 Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.024419 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerDied","Data":"b38c4d760b78c9580d7920d8d103f03ae36a4fb22594d35317c5a0fc8161982d"} Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.027371 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b9a1165-24e0-4062-b805-0f8262822507" containerID="4d96789a875fc9919836ff36dc1d21b427a832c3292532d47b588b770f2a75ed" exitCode=0 Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.027460 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerDied","Data":"4d96789a875fc9919836ff36dc1d21b427a832c3292532d47b588b770f2a75ed"} Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.032757 5012 generic.go:334] "Generic (PLEG): container finished" podID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerID="0cfb2a088fe11f89edf830fa194013f6aaa648491c76af6a0be0b1ed87f083f2" exitCode=0 Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.032884 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerDied","Data":"0cfb2a088fe11f89edf830fa194013f6aaa648491c76af6a0be0b1ed87f083f2"} Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.035718 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rprhz" podUID="e45c788c-c8a0-4563-8d05-71915e390342" Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.040034 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-48wp9" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.041788 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-br86z" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.973745 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.976029 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13070b31-1da3-4cbd-8281-072d0ab1a3dd" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.976055 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="13070b31-1da3-4cbd-8281-072d0ab1a3dd" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.976080 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6954f621-15eb-4515-8855-5bf05a7119c5" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.976092 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6954f621-15eb-4515-8855-5bf05a7119c5" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.976261 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="13070b31-1da3-4cbd-8281-072d0ab1a3dd" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.976283 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6954f621-15eb-4515-8855-5bf05a7119c5" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.978771 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.984355 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.994490 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.003734 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.042338 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerStarted","Data":"1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.045984 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerStarted","Data":"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.051132 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" event={"ID":"2e231950-a365-4a82-9481-05fdac171449","Type":"ContainerStarted","Data":"30e5371ddc17f6259cc33364fae311112285fee802719a505b42facea40f8c67"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.051164 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" event={"ID":"2e231950-a365-4a82-9481-05fdac171449","Type":"ContainerStarted","Data":"4bc8b351abe79500a1634b081e0952c1dd89a39761227cfe52a7e9bfe0b207c8"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.054456 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerStarted","Data":"ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.057124 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerStarted","Data":"70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.066697 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x576d" podStartSLOduration=2.313199768 podStartE2EDuration="35.066681602s" podCreationTimestamp="2026-02-19 05:27:36 +0000 UTC" firstStartedPulling="2026-02-19 05:27:37.723505662 +0000 UTC m=+153.756828231" lastFinishedPulling="2026-02-19 05:28:10.476987456 +0000 UTC m=+186.510310065" observedRunningTime="2026-02-19 05:28:11.065709705 +0000 UTC m=+187.099032274" watchObservedRunningTime="2026-02-19 05:28:11.066681602 +0000 UTC m=+187.100004171" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.086955 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-29nf4" podStartSLOduration=2.135048429 podStartE2EDuration="35.086935736s" podCreationTimestamp="2026-02-19 05:27:36 +0000 UTC" firstStartedPulling="2026-02-19 05:27:37.714982598 +0000 UTC m=+153.748305167" lastFinishedPulling="2026-02-19 05:28:10.666869895 +0000 UTC m=+186.700192474" observedRunningTime="2026-02-19 05:28:11.085449106 +0000 UTC m=+187.118771685" watchObservedRunningTime="2026-02-19 05:28:11.086935736 +0000 UTC m=+187.120258315" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.117290 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.117426 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.137212 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xrjxk" podStartSLOduration=2.293232252 podStartE2EDuration="37.137190402s" podCreationTimestamp="2026-02-19 05:27:34 +0000 UTC" firstStartedPulling="2026-02-19 05:27:35.593530015 +0000 UTC m=+151.626852584" lastFinishedPulling="2026-02-19 05:28:10.437488155 +0000 UTC m=+186.470810734" observedRunningTime="2026-02-19 05:28:11.135427874 +0000 UTC m=+187.168750453" watchObservedRunningTime="2026-02-19 05:28:11.137190402 +0000 UTC m=+187.170512991" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.140695 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-q5cb2" podStartSLOduration=165.140684658 podStartE2EDuration="2m45.140684658s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:28:11.10496397 +0000 UTC m=+187.138286539" watchObservedRunningTime="2026-02-19 05:28:11.140684658 +0000 UTC m=+187.174007227" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.161139 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-5q7vk" podStartSLOduration=2.270927781 podStartE2EDuration="37.161118167s" podCreationTimestamp="2026-02-19 05:27:34 +0000 UTC" firstStartedPulling="2026-02-19 05:27:35.569566439 +0000 UTC m=+151.602889008" lastFinishedPulling="2026-02-19 05:28:10.459756815 +0000 UTC m=+186.493079394" observedRunningTime="2026-02-19 05:28:11.159685858 +0000 UTC m=+187.193008417" watchObservedRunningTime="2026-02-19 05:28:11.161118167 +0000 UTC m=+187.194440736" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.218507 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.218603 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.219633 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.282159 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.328567 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.606464 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 05:28:12 crc kubenswrapper[5012]: I0219 05:28:12.066045 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6","Type":"ContainerStarted","Data":"2827b8b3cec7ebda760fcd4cccdb82fd6769198232e34770d264b12d69853428"} Feb 19 05:28:12 crc kubenswrapper[5012]: I0219 05:28:12.066354 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6","Type":"ContainerStarted","Data":"272aa842b44d1361b81ebd3418e0e6febdf25993404d5a07b1857e9a8cbdda1e"} Feb 19 05:28:12 crc kubenswrapper[5012]: I0219 05:28:12.081753 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.081724103 podStartE2EDuration="2.081724103s" podCreationTimestamp="2026-02-19 05:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:28:12.080408597 +0000 UTC m=+188.113731166" watchObservedRunningTime="2026-02-19 05:28:12.081724103 +0000 UTC m=+188.115046672" Feb 19 05:28:13 crc kubenswrapper[5012]: I0219 05:28:13.026415 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:28:13 crc kubenswrapper[5012]: I0219 05:28:13.079402 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" containerID="2827b8b3cec7ebda760fcd4cccdb82fd6769198232e34770d264b12d69853428" exitCode=0 Feb 19 05:28:13 crc kubenswrapper[5012]: I0219 05:28:13.079455 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6","Type":"ContainerDied","Data":"2827b8b3cec7ebda760fcd4cccdb82fd6769198232e34770d264b12d69853428"} Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.349748 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.368338 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") pod \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.368436 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") pod \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.368667 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" (UID: "dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.378890 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" (UID: "dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.430283 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.430385 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.470207 5012 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.470234 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.529705 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:28:14 crc 
kubenswrapper[5012]: I0219 05:28:14.530687 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.779735 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.779778 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.045656 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.067597 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.071082 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.095324 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.095675 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6","Type":"ContainerDied","Data":"272aa842b44d1361b81ebd3418e0e6febdf25993404d5a07b1857e9a8cbdda1e"} Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.095697 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="272aa842b44d1361b81ebd3418e0e6febdf25993404d5a07b1857e9a8cbdda1e" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.148213 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.149126 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.763927 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 05:28:15 crc kubenswrapper[5012]: E0219 05:28:15.764207 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" containerName="pruner" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.764225 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" containerName="pruner" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.764367 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" containerName="pruner" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.764794 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.767165 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.770018 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.774444 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.787271 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.787403 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.787457 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889090 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889139 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889159 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889259 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889271 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.915772 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.097941 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.330495 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.343830 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.343895 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.410820 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.777252 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.778382 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.833815 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.113549 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a60ebe63-e6e8-4716-b6a7-09471bd1761c","Type":"ContainerStarted","Data":"c50ce29ccaa2a6dec6251ba3718a50f9703f02c1604926defd790f301c9095a8"} Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.113627 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a60ebe63-e6e8-4716-b6a7-09471bd1761c","Type":"ContainerStarted","Data":"52bbe53adc0bf39915d8efea51ec4cc82fe83aee77b31b0c3c447b900626737b"} Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.133668 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.133649097 podStartE2EDuration="2.133649097s" podCreationTimestamp="2026-02-19 05:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:28:17.133621646 +0000 UTC m=+193.166944245" watchObservedRunningTime="2026-02-19 05:28:17.133649097 +0000 UTC m=+193.166971666" Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.169399 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.182012 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.126795 5012 generic.go:334] "Generic (PLEG): container finished" podID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerID="a95a4d514f0d6754b1714fed7c7959350d2abe5a30fa95a4004bef33fad2569c" exitCode=0 Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.126893 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerDied","Data":"a95a4d514f0d6754b1714fed7c7959350d2abe5a30fa95a4004bef33fad2569c"} Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.252674 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.253008 5012 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-5q7vk" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="registry-server" containerID="cri-o://ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf" gracePeriod=2 Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.746743 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.947346 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") pod \"6173dc70-80d4-4f9f-9129-898b2dc38692\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.948000 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") pod \"6173dc70-80d4-4f9f-9129-898b2dc38692\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.948054 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") pod \"6173dc70-80d4-4f9f-9129-898b2dc38692\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.948876 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities" (OuterVolumeSpecName: "utilities") pod "6173dc70-80d4-4f9f-9129-898b2dc38692" (UID: "6173dc70-80d4-4f9f-9129-898b2dc38692"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.958514 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw" (OuterVolumeSpecName: "kube-api-access-nrsrw") pod "6173dc70-80d4-4f9f-9129-898b2dc38692" (UID: "6173dc70-80d4-4f9f-9129-898b2dc38692"). InnerVolumeSpecName "kube-api-access-nrsrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.012019 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6173dc70-80d4-4f9f-9129-898b2dc38692" (UID: "6173dc70-80d4-4f9f-9129-898b2dc38692"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.050438 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.050491 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.050504 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.136359 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" 
event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerStarted","Data":"bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1"} Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139688 5012 generic.go:334] "Generic (PLEG): container finished" podID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerID="ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf" exitCode=0 Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139751 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139751 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerDied","Data":"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"} Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139881 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerDied","Data":"51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4"} Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139904 5012 scope.go:117] "RemoveContainer" containerID="ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.163137 5012 scope.go:117] "RemoveContainer" containerID="87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.177130 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4xvs8" podStartSLOduration=2.016196587 podStartE2EDuration="45.177107124s" podCreationTimestamp="2026-02-19 05:27:34 +0000 UTC" firstStartedPulling="2026-02-19 05:27:35.569540188 +0000 UTC m=+151.602862757" 
lastFinishedPulling="2026-02-19 05:28:18.730450725 +0000 UTC m=+194.763773294" observedRunningTime="2026-02-19 05:28:19.160344815 +0000 UTC m=+195.193667394" watchObservedRunningTime="2026-02-19 05:28:19.177107124 +0000 UTC m=+195.210429703" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.181345 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.185151 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.209094 5012 scope.go:117] "RemoveContainer" containerID="0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.242701 5012 scope.go:117] "RemoveContainer" containerID="ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf" Feb 19 05:28:19 crc kubenswrapper[5012]: E0219 05:28:19.243540 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf\": container with ID starting with ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf not found: ID does not exist" containerID="ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.243604 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"} err="failed to get container status \"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf\": rpc error: code = NotFound desc = could not find container \"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf\": container with ID starting with ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf not found: ID does not 
exist" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.243687 5012 scope.go:117] "RemoveContainer" containerID="87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4" Feb 19 05:28:19 crc kubenswrapper[5012]: E0219 05:28:19.244285 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4\": container with ID starting with 87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4 not found: ID does not exist" containerID="87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.244332 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4"} err="failed to get container status \"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4\": rpc error: code = NotFound desc = could not find container \"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4\": container with ID starting with 87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4 not found: ID does not exist" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.244353 5012 scope.go:117] "RemoveContainer" containerID="0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc" Feb 19 05:28:19 crc kubenswrapper[5012]: E0219 05:28:19.244768 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc\": container with ID starting with 0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc not found: ID does not exist" containerID="0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc" Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.244797 5012 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc"} err="failed to get container status \"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc\": rpc error: code = NotFound desc = could not find container \"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc\": container with ID starting with 0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc not found: ID does not exist" Feb 19 05:28:20 crc kubenswrapper[5012]: I0219 05:28:20.448121 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:28:20 crc kubenswrapper[5012]: I0219 05:28:20.448697 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x576d" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="registry-server" containerID="cri-o://1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7" gracePeriod=2 Feb 19 05:28:20 crc kubenswrapper[5012]: I0219 05:28:20.711439 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" path="/var/lib/kubelet/pods/6173dc70-80d4-4f9f-9129-898b2dc38692/volumes" Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.158896 5012 generic.go:334] "Generic (PLEG): container finished" podID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerID="1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7" exitCode=0 Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.158948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerDied","Data":"1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7"} Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.785460 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.793011 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") pod \"2269b2c9-4876-43e3-85ce-9650ffec804f\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.793065 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") pod \"2269b2c9-4876-43e3-85ce-9650ffec804f\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.793110 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") pod \"2269b2c9-4876-43e3-85ce-9650ffec804f\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.795555 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities" (OuterVolumeSpecName: "utilities") pod "2269b2c9-4876-43e3-85ce-9650ffec804f" (UID: "2269b2c9-4876-43e3-85ce-9650ffec804f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.803756 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc" (OuterVolumeSpecName: "kube-api-access-4g9fc") pod "2269b2c9-4876-43e3-85ce-9650ffec804f" (UID: "2269b2c9-4876-43e3-85ce-9650ffec804f"). InnerVolumeSpecName "kube-api-access-4g9fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.822122 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2269b2c9-4876-43e3-85ce-9650ffec804f" (UID: "2269b2c9-4876-43e3-85ce-9650ffec804f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.895088 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.895233 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.895251 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.168427 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerDied","Data":"e1cb3c13c7905eb67fe5b6fee6bfb21b5e93340cd8fbda0eba5b4e60709ae667"} Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.168510 5012 scope.go:117] "RemoveContainer" containerID="1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7" Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.168561 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.189162 5012 scope.go:117] "RemoveContainer" containerID="0cfb2a088fe11f89edf830fa194013f6aaa648491c76af6a0be0b1ed87f083f2" Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.198943 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.205735 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.208146 5012 scope.go:117] "RemoveContainer" containerID="425cdf1067d5b62c628d2a89d10d8e953e7e1cf4ee46294d6c0c129fa2655d83" Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.710357 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" path="/var/lib/kubelet/pods/2269b2c9-4876-43e3-85ce-9650ffec804f/volumes" Feb 19 05:28:23 crc kubenswrapper[5012]: I0219 05:28:23.177126 5012 generic.go:334] "Generic (PLEG): container finished" podID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerID="cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b" exitCode=0 Feb 19 05:28:23 crc kubenswrapper[5012]: I0219 05:28:23.177226 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerDied","Data":"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b"} Feb 19 05:28:24 crc kubenswrapper[5012]: I0219 05:28:24.356707 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:28:24 crc kubenswrapper[5012]: I0219 05:28:24.357050 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 
05:28:24 crc kubenswrapper[5012]: I0219 05:28:24.411725 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:28:25 crc kubenswrapper[5012]: I0219 05:28:25.188163 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerID="6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166" exitCode=0 Feb 19 05:28:25 crc kubenswrapper[5012]: I0219 05:28:25.188240 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerDied","Data":"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166"} Feb 19 05:28:25 crc kubenswrapper[5012]: I0219 05:28:25.237955 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:28:26 crc kubenswrapper[5012]: I0219 05:28:26.198791 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerStarted","Data":"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8"} Feb 19 05:28:26 crc kubenswrapper[5012]: I0219 05:28:26.226314 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-br86z" podStartSLOduration=4.108732446 podStartE2EDuration="52.226279582s" podCreationTimestamp="2026-02-19 05:27:34 +0000 UTC" firstStartedPulling="2026-02-19 05:27:36.705923701 +0000 UTC m=+152.739246270" lastFinishedPulling="2026-02-19 05:28:24.823470837 +0000 UTC m=+200.856793406" observedRunningTime="2026-02-19 05:28:26.222029419 +0000 UTC m=+202.255351978" watchObservedRunningTime="2026-02-19 05:28:26.226279582 +0000 UTC m=+202.259602151" Feb 19 05:28:28 crc kubenswrapper[5012]: I0219 05:28:28.219675 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerStarted","Data":"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889"} Feb 19 05:28:28 crc kubenswrapper[5012]: I0219 05:28:28.255154 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-48wp9" podStartSLOduration=2.679412905 podStartE2EDuration="51.255128056s" podCreationTimestamp="2026-02-19 05:27:37 +0000 UTC" firstStartedPulling="2026-02-19 05:27:38.739155088 +0000 UTC m=+154.772477657" lastFinishedPulling="2026-02-19 05:28:27.314870239 +0000 UTC m=+203.348192808" observedRunningTime="2026-02-19 05:28:28.252116845 +0000 UTC m=+204.285439454" watchObservedRunningTime="2026-02-19 05:28:28.255128056 +0000 UTC m=+204.288450655" Feb 19 05:28:30 crc kubenswrapper[5012]: I0219 05:28:30.238739 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerStarted","Data":"842ea38ab87f30dad259cf1979c7ff921a55d4d9e323ba2c8e89f149a1596602"} Feb 19 05:28:31 crc kubenswrapper[5012]: I0219 05:28:31.251719 5012 generic.go:334] "Generic (PLEG): container finished" podID="e45c788c-c8a0-4563-8d05-71915e390342" containerID="842ea38ab87f30dad259cf1979c7ff921a55d4d9e323ba2c8e89f149a1596602" exitCode=0 Feb 19 05:28:31 crc kubenswrapper[5012]: I0219 05:28:31.251778 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerDied","Data":"842ea38ab87f30dad259cf1979c7ff921a55d4d9e323ba2c8e89f149a1596602"} Feb 19 05:28:32 crc kubenswrapper[5012]: I0219 05:28:32.262785 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" 
event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerStarted","Data":"4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350"} Feb 19 05:28:32 crc kubenswrapper[5012]: I0219 05:28:32.292911 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rprhz" podStartSLOduration=2.3351563459999998 podStartE2EDuration="55.292885563s" podCreationTimestamp="2026-02-19 05:27:37 +0000 UTC" firstStartedPulling="2026-02-19 05:27:38.736611999 +0000 UTC m=+154.769934568" lastFinishedPulling="2026-02-19 05:28:31.694341186 +0000 UTC m=+207.727663785" observedRunningTime="2026-02-19 05:28:32.291788084 +0000 UTC m=+208.325110663" watchObservedRunningTime="2026-02-19 05:28:32.292885563 +0000 UTC m=+208.326208132" Feb 19 05:28:34 crc kubenswrapper[5012]: I0219 05:28:34.981499 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:28:34 crc kubenswrapper[5012]: I0219 05:28:34.981942 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:28:35 crc kubenswrapper[5012]: I0219 05:28:35.054271 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:28:35 crc kubenswrapper[5012]: I0219 05:28:35.351712 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:28:36 crc kubenswrapper[5012]: I0219 05:28:36.652024 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.299486 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-br86z" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="registry-server" 
containerID="cri-o://02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" gracePeriod=2 Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.560470 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.560527 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.711584 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.767080 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") pod \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.767648 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzlc7\" (UniqueName: \"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") pod \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.767693 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") pod \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.769024 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities" (OuterVolumeSpecName: "utilities") pod 
"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" (UID: "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.777389 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7" (OuterVolumeSpecName: "kube-api-access-tzlc7") pod "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" (UID: "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc"). InnerVolumeSpecName "kube-api-access-tzlc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.814956 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" (UID: "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.869160 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.869260 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzlc7\" (UniqueName: \"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.869284 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.981594 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.981666 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.036638 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.307925 5012 generic.go:334] "Generic (PLEG): container finished" podID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerID="02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" exitCode=0 Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.308009 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.308058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerDied","Data":"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8"} Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.308140 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerDied","Data":"be080096b804213f30565dd54118337146dcc411c16ff0c8a6962f9fd3f03e3a"} Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.308171 5012 scope.go:117] "RemoveContainer" containerID="02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.332188 5012 scope.go:117] "RemoveContainer" containerID="cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.348461 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.352421 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.377234 5012 scope.go:117] "RemoveContainer" containerID="e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.389671 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.402360 5012 scope.go:117] "RemoveContainer" containerID="02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" Feb 19 05:28:38 crc 
kubenswrapper[5012]: E0219 05:28:38.403137 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8\": container with ID starting with 02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8 not found: ID does not exist" containerID="02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.403176 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8"} err="failed to get container status \"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8\": rpc error: code = NotFound desc = could not find container \"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8\": container with ID starting with 02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8 not found: ID does not exist" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.403208 5012 scope.go:117] "RemoveContainer" containerID="cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b" Feb 19 05:28:38 crc kubenswrapper[5012]: E0219 05:28:38.403678 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b\": container with ID starting with cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b not found: ID does not exist" containerID="cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.403747 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b"} err="failed to get container status 
\"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b\": rpc error: code = NotFound desc = could not find container \"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b\": container with ID starting with cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b not found: ID does not exist" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.403811 5012 scope.go:117] "RemoveContainer" containerID="e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56" Feb 19 05:28:38 crc kubenswrapper[5012]: E0219 05:28:38.404351 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56\": container with ID starting with e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56 not found: ID does not exist" containerID="e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.404390 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56"} err="failed to get container status \"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56\": rpc error: code = NotFound desc = could not find container \"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56\": container with ID starting with e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56 not found: ID does not exist" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.636934 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rprhz" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" probeResult="failure" output=< Feb 19 05:28:38 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:28:38 crc kubenswrapper[5012]: > Feb 19 05:28:38 crc 
kubenswrapper[5012]: I0219 05:28:38.716342 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" path="/var/lib/kubelet/pods/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc/volumes" Feb 19 05:28:39 crc kubenswrapper[5012]: I0219 05:28:39.852896 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.078976 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" containerID="cri-o://2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d" gracePeriod=15 Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.325897 5012 generic.go:334] "Generic (PLEG): container finished" podID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerID="2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d" exitCode=0 Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.326026 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" event={"ID":"ce585ab5-2554-4d20-8789-cf5bfa8e45a7","Type":"ContainerDied","Data":"2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d"} Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.326115 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-48wp9" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="registry-server" containerID="cri-o://dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" gracePeriod=2 Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.501807 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.608795 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.608858 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.608897 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.608968 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609031 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: 
\"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609067 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609094 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609129 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609183 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc 
kubenswrapper[5012]: I0219 05:28:40.609206 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609234 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609262 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609317 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.611295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.611523 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.612372 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.612531 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.619159 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.619697 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.620035 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj" (OuterVolumeSpecName: "kube-api-access-pmlnj") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "kube-api-access-pmlnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.620093 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.621083 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.621371 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.621690 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.624261 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.631857 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.637635 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711173 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711764 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711793 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711826 5012 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711856 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 
crc kubenswrapper[5012]: I0219 05:28:40.711884 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711912 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712042 5012 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712065 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712087 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712109 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712133 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712161 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712192 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.266866 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.324454 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") pod \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.326579 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") pod \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.326709 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") pod \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " Feb 19 
05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.329139 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities" (OuterVolumeSpecName: "utilities") pod "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" (UID: "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.333714 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl" (OuterVolumeSpecName: "kube-api-access-rlfxl") pod "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" (UID: "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a"). InnerVolumeSpecName "kube-api-access-rlfxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336760 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerID="dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" exitCode=0 Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336849 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerDied","Data":"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889"} Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336933 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerDied","Data":"d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab"} Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336960 5012 scope.go:117] "RemoveContainer" containerID="dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.345365 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" event={"ID":"ce585ab5-2554-4d20-8789-cf5bfa8e45a7","Type":"ContainerDied","Data":"bdf60105a735686277da3c5b1467ac389878a76a65699e5227c68bdc76452b4e"} Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.345486 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.380826 5012 scope.go:117] "RemoveContainer" containerID="6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.383884 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.387207 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.409015 5012 scope.go:117] "RemoveContainer" containerID="beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.431785 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.431832 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.432467 5012 scope.go:117] "RemoveContainer" containerID="dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" Feb 19 05:28:41 crc kubenswrapper[5012]: E0219 05:28:41.433213 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889\": container with ID starting with dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889 not found: ID does not exist" containerID="dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" Feb 
19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.433279 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889"} err="failed to get container status \"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889\": rpc error: code = NotFound desc = could not find container \"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889\": container with ID starting with dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889 not found: ID does not exist" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.433349 5012 scope.go:117] "RemoveContainer" containerID="6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166" Feb 19 05:28:41 crc kubenswrapper[5012]: E0219 05:28:41.433805 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166\": container with ID starting with 6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166 not found: ID does not exist" containerID="6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.433852 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166"} err="failed to get container status \"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166\": rpc error: code = NotFound desc = could not find container \"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166\": container with ID starting with 6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166 not found: ID does not exist" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.433881 5012 scope.go:117] "RemoveContainer" 
containerID="beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d" Feb 19 05:28:41 crc kubenswrapper[5012]: E0219 05:28:41.434256 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d\": container with ID starting with beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d not found: ID does not exist" containerID="beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.434295 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d"} err="failed to get container status \"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d\": rpc error: code = NotFound desc = could not find container \"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d\": container with ID starting with beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d not found: ID does not exist" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.434345 5012 scope.go:117] "RemoveContainer" containerID="2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.504122 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" (UID: "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.533685 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.683943 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.689619 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:28:42 crc kubenswrapper[5012]: I0219 05:28:42.712555 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" path="/var/lib/kubelet/pods/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a/volumes" Feb 19 05:28:42 crc kubenswrapper[5012]: I0219 05:28:42.714653 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" path="/var/lib/kubelet/pods/ce585ab5-2554-4d20-8789-cf5bfa8e45a7/volumes" Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.430547 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.430625 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.430706 5012 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.431818 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.431925 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049" gracePeriod=600 Feb 19 05:28:45 crc kubenswrapper[5012]: I0219 05:28:45.381494 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049" exitCode=0 Feb 19 05:28:45 crc kubenswrapper[5012]: I0219 05:28:45.381655 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049"} Feb 19 05:28:45 crc kubenswrapper[5012]: I0219 05:28:45.382383 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0"} Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.332569 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg"] Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333465 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333506 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333531 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333547 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333569 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333586 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333612 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333630 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333656 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333672 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333705 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333722 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333744 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333761 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333787 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333803 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333827 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333843 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333862 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333877 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333901 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333918 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333942 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333957 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333981 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333996 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334214 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334242 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334265 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334283 5012 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334339 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.335100 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341226 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341461 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341621 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341799 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341808 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.342018 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.343382 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.343549 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"audit" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.344390 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.354244 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.354515 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.368969 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg"] Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.392695 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.394213 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.394896 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399281 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399413 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399499 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399540 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399572 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399897 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399953 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400068 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/259d2430-9728-4214-902c-aeafb7a74034-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: 
\"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400135 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhxl\" (UniqueName: \"kubernetes.io/projected/259d2430-9728-4214-902c-aeafb7a74034-kube-api-access-lqhxl\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400350 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400375 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.402605 5012 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502204 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502270 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhxl\" (UniqueName: \"kubernetes.io/projected/259d2430-9728-4214-902c-aeafb7a74034-kube-api-access-lqhxl\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502333 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502358 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502392 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502425 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502460 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502489 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502516 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: 
\"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502539 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502610 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502648 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502699 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/259d2430-9728-4214-902c-aeafb7a74034-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502789 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/259d2430-9728-4214-902c-aeafb7a74034-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.503566 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.503707 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.504415 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " 
pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.506062 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.509078 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.510063 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.510815 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.511067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.511653 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.511920 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.509832 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.516636 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " 
pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.526719 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhxl\" (UniqueName: \"kubernetes.io/projected/259d2430-9728-4214-902c-aeafb7a74034-kube-api-access-lqhxl\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.684564 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.930242 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg"] Feb 19 05:28:46 crc kubenswrapper[5012]: W0219 05:28:46.935457 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod259d2430_9728_4214_902c_aeafb7a74034.slice/crio-2069268689d599dc4f86a7ada870241590d4b4feb7409dfc43d39ce5ecc857aa WatchSource:0}: Error finding container 2069268689d599dc4f86a7ada870241590d4b4feb7409dfc43d39ce5ecc857aa: Status 404 returned error can't find the container with id 2069268689d599dc4f86a7ada870241590d4b4feb7409dfc43d39ce5ecc857aa Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.404704 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" event={"ID":"259d2430-9728-4214-902c-aeafb7a74034","Type":"ContainerStarted","Data":"cbefc13777846cb6503b214344af4b5d9528796f30de7fa0f13cde87379f05ff"} Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.405058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" 
event={"ID":"259d2430-9728-4214-902c-aeafb7a74034","Type":"ContainerStarted","Data":"2069268689d599dc4f86a7ada870241590d4b4feb7409dfc43d39ce5ecc857aa"} Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.406630 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.440804 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" podStartSLOduration=32.440781022 podStartE2EDuration="32.440781022s" podCreationTimestamp="2026-02-19 05:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:28:47.440106124 +0000 UTC m=+223.473428733" watchObservedRunningTime="2026-02-19 05:28:47.440781022 +0000 UTC m=+223.474103621" Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.634594 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.706815 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.776072 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.550119 5012 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551004 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
containerID="cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551135 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551192 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551187 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551289 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551951 5012 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552179 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552199 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552219 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552229 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552239 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552249 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552265 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552273 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552284 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552292 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 
05:28:54.552320 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552330 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552339 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552347 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552463 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552474 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552491 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552502 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552514 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552524 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 
05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.554124 5012 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.554680 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.558336 5012 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.626350 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.652936 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.652991 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653089 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653121 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653141 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653174 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653200 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.708471 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755048 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755151 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755219 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755227 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755332 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755378 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755428 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755498 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755588 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755625 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755649 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755699 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755825 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.756158 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.756322 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.926409 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.960076 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18958eb4a555e533 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 
05:28:54.959285555 +0000 UTC m=+230.992608174,LastTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.378025 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.379190 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.379697 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.380098 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.380548 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.380601 5012 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.380950 
5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.463589 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.465421 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466549 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" exitCode=0 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466599 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" exitCode=0 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466617 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" exitCode=0 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466636 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" exitCode=2 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466659 5012 scope.go:117] "RemoveContainer" containerID="093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.470932 
5012 generic.go:334] "Generic (PLEG): container finished" podID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" containerID="c50ce29ccaa2a6dec6251ba3718a50f9703f02c1604926defd790f301c9095a8" exitCode=0 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.471054 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a60ebe63-e6e8-4716-b6a7-09471bd1761c","Type":"ContainerDied","Data":"c50ce29ccaa2a6dec6251ba3718a50f9703f02c1604926defd790f301c9095a8"} Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.471998 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.472599 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.473927 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b6f9cb760466cabd1e0a03b9e7b38403b65eda373e574649db72eb2355616bd8"} Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.473974 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7aae5124379fb33aea819bacdd31748ede0457373ca9eeb45432122370cef8f9"} Feb 19 05:28:55 crc 
kubenswrapper[5012]: I0219 05:28:55.475201 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.475714 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.581592 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.982479 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.483454 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.763376 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.764868 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.765366 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:56 crc kubenswrapper[5012]: E0219 05:28:56.784064 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891351 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") pod \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891491 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") pod \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891544 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") pod \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891639 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock" (OuterVolumeSpecName: "var-lock") pod "a60ebe63-e6e8-4716-b6a7-09471bd1761c" (UID: "a60ebe63-e6e8-4716-b6a7-09471bd1761c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891736 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a60ebe63-e6e8-4716-b6a7-09471bd1761c" (UID: "a60ebe63-e6e8-4716-b6a7-09471bd1761c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.892032 5012 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.892051 5012 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.896875 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a60ebe63-e6e8-4716-b6a7-09471bd1761c" (UID: "a60ebe63-e6e8-4716-b6a7-09471bd1761c"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.992785 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.993810 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.994486 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.995209 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.995851 5012 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.996447 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 
05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.093904 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094049 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094411 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094473 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094563 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094707 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.095050 5012 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.095073 5012 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.095081 5012 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.260751 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18958eb4a555e533 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,LastTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.495589 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.497212 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" exitCode=0 Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.497272 5012 scope.go:117] "RemoveContainer" containerID="196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.497398 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.509889 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a60ebe63-e6e8-4716-b6a7-09471bd1761c","Type":"ContainerDied","Data":"52bbe53adc0bf39915d8efea51ec4cc82fe83aee77b31b0c3c447b900626737b"} Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.509937 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52bbe53adc0bf39915d8efea51ec4cc82fe83aee77b31b0c3c447b900626737b" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.510021 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.520366 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.520602 5012 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.520789 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 
05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.534374 5012 scope.go:117] "RemoveContainer" containerID="a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.537048 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.537285 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.537477 5012 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.560493 5012 scope.go:117] "RemoveContainer" containerID="d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.588374 5012 scope.go:117] "RemoveContainer" containerID="526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.621973 5012 scope.go:117] "RemoveContainer" containerID="c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.645096 5012 scope.go:117] "RemoveContainer" 
containerID="fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.664886 5012 scope.go:117] "RemoveContainer" containerID="196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.665528 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\": container with ID starting with 196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0 not found: ID does not exist" containerID="196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.665599 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0"} err="failed to get container status \"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\": rpc error: code = NotFound desc = could not find container \"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\": container with ID starting with 196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.665641 5012 scope.go:117] "RemoveContainer" containerID="a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.666068 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\": container with ID starting with a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9 not found: ID does not exist" containerID="a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" Feb 19 05:28:57 crc 
kubenswrapper[5012]: I0219 05:28:57.666153 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9"} err="failed to get container status \"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\": rpc error: code = NotFound desc = could not find container \"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\": container with ID starting with a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.666215 5012 scope.go:117] "RemoveContainer" containerID="d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.666633 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\": container with ID starting with d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83 not found: ID does not exist" containerID="d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.666685 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83"} err="failed to get container status \"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\": rpc error: code = NotFound desc = could not find container \"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\": container with ID starting with d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.666718 5012 scope.go:117] "RemoveContainer" containerID="526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" Feb 19 
05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.667076 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\": container with ID starting with 526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2 not found: ID does not exist" containerID="526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.667161 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2"} err="failed to get container status \"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\": rpc error: code = NotFound desc = could not find container \"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\": container with ID starting with 526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.667230 5012 scope.go:117] "RemoveContainer" containerID="c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.667942 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\": container with ID starting with c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135 not found: ID does not exist" containerID="c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.667981 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135"} err="failed to get container status 
\"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\": rpc error: code = NotFound desc = could not find container \"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\": container with ID starting with c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.668002 5012 scope.go:117] "RemoveContainer" containerID="fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.668505 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\": container with ID starting with fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c not found: ID does not exist" containerID="fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.668552 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c"} err="failed to get container status \"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\": rpc error: code = NotFound desc = could not find container \"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\": container with ID starting with fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c not found: ID does not exist" Feb 19 05:28:58 crc kubenswrapper[5012]: E0219 05:28:58.385431 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="3.2s" Feb 19 05:28:58 crc kubenswrapper[5012]: I0219 05:28:58.710914 5012 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 05:29:01 crc kubenswrapper[5012]: E0219 05:29:01.587380 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="6.4s" Feb 19 05:29:04 crc kubenswrapper[5012]: I0219 05:29:04.706910 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:04 crc kubenswrapper[5012]: I0219 05:29:04.707633 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:05 crc kubenswrapper[5012]: E0219 05:29:05.783396 5012 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" volumeName="registry-storage" Feb 19 05:29:07 crc kubenswrapper[5012]: E0219 05:29:07.261500 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18958eb4a555e533 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,LastTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 05:29:07 crc kubenswrapper[5012]: E0219 05:29:07.988988 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="7s" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.604168 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.604281 5012 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583" exitCode=1 Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.604384 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583"} Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.605421 5012 scope.go:117] "RemoveContainer" containerID="e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.606003 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.607194 5012 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.607770 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.704060 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.711580 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.712064 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.712557 5012 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.728501 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.728541 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:09 crc kubenswrapper[5012]: E0219 05:29:09.729170 5012 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.729728 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:09 crc kubenswrapper[5012]: W0219 05:29:09.770846 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-145c1437154aa1369dea3e1b974d5651a0ec0495c0efacd59b89fa5850af9323 WatchSource:0}: Error finding container 145c1437154aa1369dea3e1b974d5651a0ec0495c0efacd59b89fa5850af9323: Status 404 returned error can't find the container with id 145c1437154aa1369dea3e1b974d5651a0ec0495c0efacd59b89fa5850af9323 Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.611254 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.614402 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.614553 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e"} Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.615676 5012 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" 
Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.616204 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.616692 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.617192 5012 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="deb3b55ddd1daaca601a5db6b545862df01aec3ccbc7dc9516c84175845d0612" exitCode=0 Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.617254 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"deb3b55ddd1daaca601a5db6b545862df01aec3ccbc7dc9516c84175845d0612"} Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.620576 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"145c1437154aa1369dea3e1b974d5651a0ec0495c0efacd59b89fa5850af9323"} Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.621121 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.621156 5012 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.621697 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:10 crc kubenswrapper[5012]: E0219 05:29:10.621724 5012 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.622202 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.622776 5012 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:11 crc kubenswrapper[5012]: I0219 05:29:11.631801 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e711166a6f96dfd9b4c99d1b232e6415693e0391069db315fadb15f670110255"} Feb 19 
05:29:11 crc kubenswrapper[5012]: I0219 05:29:11.631872 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"79199297292ba76371bcef8e3dbc8db37207a04b93b48a0a34f0d3003b1bf1b5"} Feb 19 05:29:11 crc kubenswrapper[5012]: I0219 05:29:11.631893 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf6f0c3167844de76fec185a1b785bbde18fa1b90f768354082420d36a34d37c"} Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.643958 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bcc26507116cf9054581a75d1d0606ee26f13eb9f3e765fb3afbeb4a6cea69dc"} Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.644344 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d0a3474e0b7a76c454a495c38625c8965e1c308c8e645eb8311e3810660a1541"} Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.644647 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.644665 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.644895 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:14 crc kubenswrapper[5012]: I0219 05:29:14.730424 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:14 crc kubenswrapper[5012]: I0219 05:29:14.730920 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:14 crc kubenswrapper[5012]: I0219 05:29:14.741760 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:17 crc kubenswrapper[5012]: I0219 05:29:17.266296 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:29:17 crc kubenswrapper[5012]: I0219 05:29:17.660438 5012 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:17 crc kubenswrapper[5012]: I0219 05:29:17.759706 5012 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df47b349-1342-47a4-a6c3-ed3082d2e576" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.686843 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.686879 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.690585 5012 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df47b349-1342-47a4-a6c3-ed3082d2e576" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.692898 5012 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" 
containerID="cri-o://cf6f0c3167844de76fec185a1b785bbde18fa1b90f768354082420d36a34d37c" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.692930 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:19 crc kubenswrapper[5012]: I0219 05:29:19.694166 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:19 crc kubenswrapper[5012]: I0219 05:29:19.694229 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:19 crc kubenswrapper[5012]: I0219 05:29:19.698509 5012 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df47b349-1342-47a4-a6c3-ed3082d2e576" Feb 19 05:29:20 crc kubenswrapper[5012]: I0219 05:29:20.611739 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:29:20 crc kubenswrapper[5012]: I0219 05:29:20.611993 5012 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 05:29:20 crc kubenswrapper[5012]: I0219 05:29:20.612050 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 05:29:27 crc 
kubenswrapper[5012]: I0219 05:29:27.198029 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:29:27 crc kubenswrapper[5012]: I0219 05:29:27.333518 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 05:29:27 crc kubenswrapper[5012]: I0219 05:29:27.930879 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.441124 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.483356 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.495383 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.497346 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.558934 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.588054 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.629480 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.693428 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.165823 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.260413 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.332055 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.620460 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.677662 5012 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.969170 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.020614 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.102554 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.268860 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.381806 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 
05:29:30.558739 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.611899 5012 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.611983 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.613893 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.750925 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.798560 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.911930 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.988160 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.276912 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.290182 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.299468 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.331795 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.337459 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.423598 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.475426 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.485891 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.601831 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.623666 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.692358 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.800371 5012 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.814022 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.927854 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.953428 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.017214 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.208957 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.260676 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.366154 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.399505 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.448805 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.479466 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.484821 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.564919 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.653929 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.665074 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.757243 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.961986 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.981515 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.138001 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.223132 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.279398 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" 
Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.301969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.324573 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.329648 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.341537 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.383190 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.394737 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.470857 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.612957 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.634969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.639430 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.661118 5012 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.898409 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.904459 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.946234 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.012271 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.123423 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.176633 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.304249 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.314109 5012 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.332493 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.419802 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 05:29:34 crc 
kubenswrapper[5012]: I0219 05:29:34.446891 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.467916 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.510431 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.511532 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.526338 5012 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.540929 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.596040 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.770799 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.816895 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.820504 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.823521 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.850295 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.868008 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.013626 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.015406 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.066396 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.074600 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.082946 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.218510 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.224797 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.339977 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.516786 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.523920 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.587848 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.652269 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.721538 5012 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.731556 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.732495 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.812980 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.828605 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.854986 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.860542 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.870891 5012 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.881527 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.974007 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.249229 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.262863 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.464746 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.523361 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.553776 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.605840 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.609603 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.663196 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.709473 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.738341 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.751201 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.774447 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.796197 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.885980 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.908388 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.926692 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.952671 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.969990 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.981395 5012 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.981986 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.995029 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.007466 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.067519 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.071839 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.096379 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.211744 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.269046 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.328231 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.470268 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 05:29:37 crc 
kubenswrapper[5012]: I0219 05:29:37.564180 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.722787 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.801528 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.806512 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.818455 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.932574 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.934502 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.941524 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.947398 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.022760 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.123643 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.177835 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.201837 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.269183 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.288117 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.347915 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.419179 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.487667 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.497355 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.497847 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.569551 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.574787 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.673949 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.706874 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.804397 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.812867 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.833540 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.880561 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.880814 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.913731 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.935151 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.954508 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.959113 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.014122 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.054098 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.184697 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.258441 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.305469 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.388873 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.409247 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.442027 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.470201 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.495806 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.570513 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.651631 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.679481 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.726837 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.789214 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.792983 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.802865 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.979559 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.202945 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.339999 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.342192 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.431414 5012 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.613349 5012 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.613458 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.613621 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.614697 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.614926 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e" gracePeriod=30
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.618704 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.756059 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.816859 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.841386 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.924903 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.963480 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.024185 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.055145 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.081906 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.164387 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.272028 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.275856 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.308027 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.326062 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.397247 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.420871 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.469867 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.500729 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.528878 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.675318 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.722285 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.772566 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.781080 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.806093 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.813903 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.050766 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.091021 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.107559 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.181456 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.194288 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.302128 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.342741 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.392032 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.557418 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.574185 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.619277 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.686826 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.135581 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.187402 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.709911 5012 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.715419 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=49.71539245 podStartE2EDuration="49.71539245s" podCreationTimestamp="2026-02-19 05:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:29:17.698443305 +0000 UTC m=+253.731765904" watchObservedRunningTime="2026-02-19 05:29:43.71539245 +0000 UTC m=+279.748715059"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.717092 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.717147 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.724538 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.770610 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.770553583 podStartE2EDuration="26.770553583s" podCreationTimestamp="2026-02-19 05:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:29:43.744679852 +0000 UTC m=+279.778002421" watchObservedRunningTime="2026-02-19 05:29:43.770553583 +0000 UTC m=+279.803876192"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.802832 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.807834 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 05:29:50 crc kubenswrapper[5012]: I0219 05:29:50.538121 5012 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 05:29:50 crc kubenswrapper[5012]: I0219 05:29:50.539763 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b6f9cb760466cabd1e0a03b9e7b38403b65eda373e574649db72eb2355616bd8" gracePeriod=5
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.428184 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.430734 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4xvs8" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="registry-server" containerID="cri-o://bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.436999 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.440949 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xrjxk" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="registry-server" containerID="cri-o://70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.458528 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.458974 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" containerID="cri-o://48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.470765 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.471462 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-29nf4" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="registry-server" containerID="cri-o://ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.496357 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.496718 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rprhz" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" containerID="cri-o://4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.955973 5012 generic.go:334] "Generic (PLEG): container finished" podID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerID="48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.956067 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" event={"ID":"562c18aa-5aed-4f1e-95f5-da1fe7c02523","Type":"ContainerDied","Data":"48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.956479 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" event={"ID":"562c18aa-5aed-4f1e-95f5-da1fe7c02523","Type":"ContainerDied","Data":"a4304d16005995731fefdc081d0677adb43c535c36d93bdb10216b67e4aa8631"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.956499 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4304d16005995731fefdc081d0677adb43c535c36d93bdb10216b67e4aa8631"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.960889 5012 generic.go:334] "Generic (PLEG): container finished" podID="e45c788c-c8a0-4563-8d05-71915e390342" containerID="4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.961011 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerDied","Data":"4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.963979 5012 generic.go:334] "Generic (PLEG): container finished" podID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerID="ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.964064 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerDied","Data":"ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.964143 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerDied","Data":"d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.964165 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.966903 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b9a1165-24e0-4062-b805-0f8262822507" containerID="70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.967010 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerDied","Data":"70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.967089 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerDied","Data":"1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.967125 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.969794 5012 generic.go:334] "Generic (PLEG): container finished" podID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerID="bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.969906 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerDied","Data":"bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.969972 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerDied","Data":"b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.970005 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.972340 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.972386 5012 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b6f9cb760466cabd1e0a03b9e7b38403b65eda373e574649db72eb2355616bd8" exitCode=137
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.036732 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrjxk"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.053702 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29nf4"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.065132 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.077417 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xvs8"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084209 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") pod \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084276 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") pod \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084317 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") pod \"7b9a1165-24e0-4062-b805-0f8262822507\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084350 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") pod \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084414 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") pod \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084443 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") pod \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084541 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") pod \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084584 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") pod \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084642 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") pod \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084666 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") pod \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084706 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") pod \"7b9a1165-24e0-4062-b805-0f8262822507\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084725 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") pod \"7b9a1165-24e0-4062-b805-0f8262822507\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.086631 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "562c18aa-5aed-4f1e-95f5-da1fe7c02523" (UID: "562c18aa-5aed-4f1e-95f5-da1fe7c02523"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.088090 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities" (OuterVolumeSpecName: "utilities") pod "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" (UID: "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.088183 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities" (OuterVolumeSpecName: "utilities") pod "185ea561-a45e-49e1-a46b-f9bf9f6d2527" (UID: "185ea561-a45e-49e1-a46b-f9bf9f6d2527"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.088504 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprhz"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.090200 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities" (OuterVolumeSpecName: "utilities") pod "7b9a1165-24e0-4062-b805-0f8262822507" (UID: "7b9a1165-24e0-4062-b805-0f8262822507"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.092616 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8" (OuterVolumeSpecName: "kube-api-access-gtwg8") pod "7b9a1165-24e0-4062-b805-0f8262822507" (UID: "7b9a1165-24e0-4062-b805-0f8262822507"). InnerVolumeSpecName "kube-api-access-gtwg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.093992 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t" (OuterVolumeSpecName: "kube-api-access-fz86t") pod "185ea561-a45e-49e1-a46b-f9bf9f6d2527" (UID: "185ea561-a45e-49e1-a46b-f9bf9f6d2527"). InnerVolumeSpecName "kube-api-access-fz86t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.094641 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "562c18aa-5aed-4f1e-95f5-da1fe7c02523" (UID: "562c18aa-5aed-4f1e-95f5-da1fe7c02523"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.096663 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr" (OuterVolumeSpecName: "kube-api-access-plzvr") pod "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" (UID: "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9"). InnerVolumeSpecName "kube-api-access-plzvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.098064 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.098162 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.101049 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p" (OuterVolumeSpecName: "kube-api-access-4v76p") pod "562c18aa-5aed-4f1e-95f5-da1fe7c02523" (UID: "562c18aa-5aed-4f1e-95f5-da1fe7c02523"). InnerVolumeSpecName "kube-api-access-4v76p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.143981 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "185ea561-a45e-49e1-a46b-f9bf9f6d2527" (UID: "185ea561-a45e-49e1-a46b-f9bf9f6d2527"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.170360 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" (UID: "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.173290 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b9a1165-24e0-4062-b805-0f8262822507" (UID: "7b9a1165-24e0-4062-b805-0f8262822507"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.185928 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") pod \"e45c788c-c8a0-4563-8d05-71915e390342\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186002 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186031 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 05:29:56 crc 
kubenswrapper[5012]: I0219 05:29:56.186054 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186080 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186125 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186183 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186209 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") pod \"e45c788c-c8a0-4563-8d05-71915e390342\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186269 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186291 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") pod \"e45c788c-c8a0-4563-8d05-71915e390342\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186715 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities" (OuterVolumeSpecName: "utilities") pod "e45c788c-c8a0-4563-8d05-71915e390342" (UID: "e45c788c-c8a0-4563-8d05-71915e390342"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187236 5012 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187275 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187296 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187340 5012 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187360 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187378 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187395 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") on node 
\"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187415 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187436 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187457 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187476 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187495 5012 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187518 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187535 5012 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187552 5012 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187568 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187620 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.190620 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f" (OuterVolumeSpecName: "kube-api-access-pr49f") pod "e45c788c-c8a0-4563-8d05-71915e390342" (UID: "e45c788c-c8a0-4563-8d05-71915e390342"). InnerVolumeSpecName "kube-api-access-pr49f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.194263 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.289139 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.289184 5012 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.289202 5012 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.306472 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e45c788c-c8a0-4563-8d05-71915e390342" (UID: "e45c788c-c8a0-4563-8d05-71915e390342"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.390869 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.716639 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.717565 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.735455 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.735498 5012 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4c519278-8830-47e6-a224-0edc04b31b98" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.735532 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.735546 5012 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4c519278-8830-47e6-a224-0edc04b31b98" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.982418 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.982604 5012 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.983351 5012 scope.go:117] "RemoveContainer" containerID="b6f9cb760466cabd1e0a03b9e7b38403b65eda373e574649db72eb2355616bd8" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.988599 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.989121 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.989446 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.989959 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.990156 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.990562 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerDied","Data":"330c4277adf991cb8d45015f1cf3ae0cb9906f5605d279d3a2745e3670726677"} Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.011148 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.015297 5012 scope.go:117] "RemoveContainer" containerID="4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350" Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.050233 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.051061 5012 scope.go:117] "RemoveContainer" containerID="842ea38ab87f30dad259cf1979c7ff921a55d4d9e323ba2c8e89f149a1596602" Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.061948 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.074486 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.081395 5012 scope.go:117] "RemoveContainer" containerID="4b13a012dcea4fefc2b4e7757fddd764d86d7e0aa4fa7cfad77d502f2efa1ea0" Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.085510 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.094902 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-29nf4"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.098818 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.101847 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.106142 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.110287 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"] Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.111951 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"] Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.713179 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" path="/var/lib/kubelet/pods/185ea561-a45e-49e1-a46b-f9bf9f6d2527/volumes" Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.714432 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" path="/var/lib/kubelet/pods/562c18aa-5aed-4f1e-95f5-da1fe7c02523/volumes" Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.714877 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9a1165-24e0-4062-b805-0f8262822507" path="/var/lib/kubelet/pods/7b9a1165-24e0-4062-b805-0f8262822507/volumes" Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.715915 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" path="/var/lib/kubelet/pods/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9/volumes" Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.716491 5012 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45c788c-c8a0-4563-8d05-71915e390342" path="/var/lib/kubelet/pods/e45c788c-c8a0-4563-8d05-71915e390342/volumes" Feb 19 05:29:59 crc kubenswrapper[5012]: I0219 05:29:59.074512 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 05:30:00 crc kubenswrapper[5012]: I0219 05:30:00.144091 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 05:30:01 crc kubenswrapper[5012]: I0219 05:30:01.855208 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 05:30:03 crc kubenswrapper[5012]: I0219 05:30:03.064969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 05:30:04 crc kubenswrapper[5012]: I0219 05:30:04.482377 5012 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 05:30:04 crc kubenswrapper[5012]: I0219 05:30:04.565439 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 05:30:05 crc kubenswrapper[5012]: I0219 05:30:05.998027 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.097428 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.100549 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.100603 5012 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e" exitCode=137 Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.100639 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e"} Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.100679 5012 scope.go:117] "RemoveContainer" containerID="e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583" Feb 19 05:30:12 crc kubenswrapper[5012]: I0219 05:30:12.109931 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 05:30:12 crc kubenswrapper[5012]: I0219 05:30:12.112663 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"69d50f3c484f71457aead6e9fa94aac8b61c7cd6a7bc711668b6fc0f9ca49157"} Feb 19 05:30:13 crc kubenswrapper[5012]: I0219 05:30:13.637430 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 05:30:14 crc kubenswrapper[5012]: I0219 05:30:14.295174 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 05:30:17 crc kubenswrapper[5012]: I0219 05:30:17.266185 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:30:18 crc kubenswrapper[5012]: I0219 05:30:18.626861 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 05:30:20 crc kubenswrapper[5012]: I0219 05:30:20.611759 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:30:20 crc kubenswrapper[5012]: I0219 05:30:20.617491 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:30:21 crc kubenswrapper[5012]: I0219 05:30:21.170973 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:30:21 crc kubenswrapper[5012]: I0219 05:30:21.304853 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.399833 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqjls"] Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400420 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400432 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400441 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400449 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400456 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400462 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400471 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400477 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400486 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400492 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400500 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400505 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400513 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400519 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400528 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400533 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400539 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400545 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400551 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400557 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400563 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" containerName="installer" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400569 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" containerName="installer" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400579 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400585 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400592 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400598 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400604 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400609 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400616 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400621 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400698 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400707 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400716 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400723 5012 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" containerName="installer" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400730 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400737 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400746 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.401043 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.405846 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.406162 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.406282 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.406403 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.411494 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.474809 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqjls"] 
Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.528122 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.529990 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.532572 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.532578 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.537844 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.537930 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.537979 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swxjt\" (UniqueName: 
\"kubernetes.io/projected/800f8349-6ef3-44ae-90a0-56c89ca82479-kube-api-access-swxjt\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.544268 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639291 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swxjt\" (UniqueName: \"kubernetes.io/projected/800f8349-6ef3-44ae-90a0-56c89ca82479-kube-api-access-swxjt\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639395 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639445 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639474 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639522 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639545 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.640783 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.645515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 
05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.657904 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swxjt\" (UniqueName: \"kubernetes.io/projected/800f8349-6ef3-44ae-90a0-56c89ca82479-kube-api-access-swxjt\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.740478 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.741192 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.741234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.742430 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: 
\"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.744066 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.750012 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.765394 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.851916 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.942632 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqjls"] Feb 19 05:30:33 crc kubenswrapper[5012]: W0219 05:30:33.947207 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod800f8349_6ef3_44ae_90a0_56c89ca82479.slice/crio-0da434541d678745e494583af4b770ef7b1441fe49184eb098ec0a275ca69bde WatchSource:0}: Error finding container 0da434541d678745e494583af4b770ef7b1441fe49184eb098ec0a275ca69bde: Status 404 returned error can't find the container with id 0da434541d678745e494583af4b770ef7b1441fe49184eb098ec0a275ca69bde Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.046797 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 05:30:34 crc kubenswrapper[5012]: W0219 05:30:34.051057 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff63f713_7649_46d8_85cb_ef67dccf9fe6.slice/crio-b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4 WatchSource:0}: Error finding container b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4: Status 404 returned error can't find the container with id b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4 Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.249948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" event={"ID":"ff63f713-7649-46d8-85cb-ef67dccf9fe6","Type":"ContainerStarted","Data":"c5d7329af46ea59d345e496a5c84f8c51fab010adcb4a91e0080f58a2ca4a9ec"} Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.250284 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" event={"ID":"ff63f713-7649-46d8-85cb-ef67dccf9fe6","Type":"ContainerStarted","Data":"b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4"} Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.251815 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" event={"ID":"800f8349-6ef3-44ae-90a0-56c89ca82479","Type":"ContainerStarted","Data":"bac282f789775200a115771164055f59f8edf0cda080bafae25efbbc31423525"} Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.251842 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" event={"ID":"800f8349-6ef3-44ae-90a0-56c89ca82479","Type":"ContainerStarted","Data":"0da434541d678745e494583af4b770ef7b1441fe49184eb098ec0a275ca69bde"} Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.252418 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.253681 5012 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jqjls container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" start-of-body= Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.253731 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" podUID="800f8349-6ef3-44ae-90a0-56c89ca82479" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.276773 5012 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" podStartSLOduration=1.2767539829999999 podStartE2EDuration="1.276753983s" podCreationTimestamp="2026-02-19 05:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:30:34.276489416 +0000 UTC m=+330.309811985" watchObservedRunningTime="2026-02-19 05:30:34.276753983 +0000 UTC m=+330.310076552" Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.276867 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" podStartSLOduration=1.276863796 podStartE2EDuration="1.276863796s" podCreationTimestamp="2026-02-19 05:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:30:34.262138172 +0000 UTC m=+330.295460751" watchObservedRunningTime="2026-02-19 05:30:34.276863796 +0000 UTC m=+330.310186365" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.007928 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.008175 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" containerID="cri-o://d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2" gracePeriod=30 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.101610 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.101838 5012 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" containerID="cri-o://0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4" gracePeriod=30 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.179638 5012 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ntrlp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.179706 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.256466 5012 generic.go:334] "Generic (PLEG): container finished" podID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" containerID="c5d7329af46ea59d345e496a5c84f8c51fab010adcb4a91e0080f58a2ca4a9ec" exitCode=0 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.256509 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" event={"ID":"ff63f713-7649-46d8-85cb-ef67dccf9fe6","Type":"ContainerDied","Data":"c5d7329af46ea59d345e496a5c84f8c51fab010adcb4a91e0080f58a2ca4a9ec"} Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.257868 5012 generic.go:334] "Generic (PLEG): container finished" podID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerID="d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2" exitCode=0 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.257931 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" event={"ID":"7e9dd710-d0ec-443f-a081-b18c4b6abe36","Type":"ContainerDied","Data":"d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2"} Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.259723 5012 generic.go:334] "Generic (PLEG): container finished" podID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerID="0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4" exitCode=0 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.259799 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" event={"ID":"89f1d0f3-c220-4668-b822-3b20b64ebfb8","Type":"ContainerDied","Data":"0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4"} Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.288041 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.288160 5012 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mn4f2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.288194 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.462976 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.499142 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.561889 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") pod \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.561941 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") pod \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.561986 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") pod \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.562013 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") pod \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.562092 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") pod 
\"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.562921 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.563170 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.564411 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config" (OuterVolumeSpecName: "config") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.568517 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz" (OuterVolumeSpecName: "kube-api-access-q5vpz") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "kube-api-access-q5vpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.569472 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663236 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") pod \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663353 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") pod \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663397 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") pod \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663424 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") pod \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663622 5012 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663634 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663642 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663650 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663658 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.664712 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca" (OuterVolumeSpecName: "client-ca") pod "89f1d0f3-c220-4668-b822-3b20b64ebfb8" (UID: "89f1d0f3-c220-4668-b822-3b20b64ebfb8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.664834 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config" (OuterVolumeSpecName: "config") pod "89f1d0f3-c220-4668-b822-3b20b64ebfb8" (UID: "89f1d0f3-c220-4668-b822-3b20b64ebfb8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.667417 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89f1d0f3-c220-4668-b822-3b20b64ebfb8" (UID: "89f1d0f3-c220-4668-b822-3b20b64ebfb8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.668176 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97" (OuterVolumeSpecName: "kube-api-access-fgg97") pod "89f1d0f3-c220-4668-b822-3b20b64ebfb8" (UID: "89f1d0f3-c220-4668-b822-3b20b64ebfb8"). InnerVolumeSpecName "kube-api-access-fgg97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.764833 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.764875 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.764888 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.764898 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.268479 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" event={"ID":"89f1d0f3-c220-4668-b822-3b20b64ebfb8","Type":"ContainerDied","Data":"5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127"} Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.268560 5012 scope.go:117] "RemoveContainer" containerID="0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.268505 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.276245 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.278390 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" event={"ID":"7e9dd710-d0ec-443f-a081-b18c4b6abe36","Type":"ContainerDied","Data":"1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca"} Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.298606 5012 scope.go:117] "RemoveContainer" containerID="d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.307543 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.325514 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.336581 5012 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.349075 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.423273 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:36 crc kubenswrapper[5012]: E0219 05:30:36.423702 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.423775 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: E0219 05:30:36.423794 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.423801 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.424017 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.424034 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.424731 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.429890 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430070 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430358 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430542 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430668 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430816 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.431332 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.434448 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.436695 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.437287 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.437779 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.437870 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.438343 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.438360 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.442879 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.450658 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.463296 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.613411 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623139 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd25j\" (UniqueName: \"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623207 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623236 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623269 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") pod \"controller-manager-678789d75d-n5n5h\" (UID: 
\"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623297 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623349 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623515 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623765 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.710068 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" path="/var/lib/kubelet/pods/7e9dd710-d0ec-443f-a081-b18c4b6abe36/volumes" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.710579 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" path="/var/lib/kubelet/pods/89f1d0f3-c220-4668-b822-3b20b64ebfb8/volumes" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.715494 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:36 crc kubenswrapper[5012]: E0219 05:30:36.715822 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-k845n proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" podUID="5e26810d-df6b-4534-bdab-c3d121e79479" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724619 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") pod \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724695 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") pod \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724759 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") pod \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724879 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724905 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724926 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724997 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd25j\" (UniqueName: \"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.725020 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.725042 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.725061 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc 
kubenswrapper[5012]: I0219 05:30:36.725079 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.725795 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume" (OuterVolumeSpecName: "config-volume") pod "ff63f713-7649-46d8-85cb-ef67dccf9fe6" (UID: "ff63f713-7649-46d8-85cb-ef67dccf9fe6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.726103 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.726653 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.727228 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") pod \"controller-manager-678789d75d-n5n5h\" (UID: 
\"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.727331 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.728847 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.735592 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ff63f713-7649-46d8-85cb-ef67dccf9fe6" (UID: "ff63f713-7649-46d8-85cb-ef67dccf9fe6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.735620 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n" (OuterVolumeSpecName: "kube-api-access-62m9n") pod "ff63f713-7649-46d8-85cb-ef67dccf9fe6" (UID: "ff63f713-7649-46d8-85cb-ef67dccf9fe6"). InnerVolumeSpecName "kube-api-access-62m9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.736233 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.739890 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:36 crc kubenswrapper[5012]: E0219 05:30:36.740221 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-gd25j serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" podUID="018f3b6e-7828-44c1-923e-f438710195ca" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.747884 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.751895 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd25j\" (UniqueName: \"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.756603 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.826240 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.827002 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.827033 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.296963 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.297597 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.299899 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.300012 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" event={"ID":"ff63f713-7649-46d8-85cb-ef67dccf9fe6","Type":"ContainerDied","Data":"b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4"} Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.300061 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.313540 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.325254 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.336475 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd25j\" (UniqueName: \"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") pod \"018f3b6e-7828-44c1-923e-f438710195ca\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.337295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.338780 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.338918 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") pod \"018f3b6e-7828-44c1-923e-f438710195ca\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.338965 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") pod \"018f3b6e-7828-44c1-923e-f438710195ca\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339490 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "018f3b6e-7828-44c1-923e-f438710195ca" (UID: "018f3b6e-7828-44c1-923e-f438710195ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339697 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config" (OuterVolumeSpecName: "config") pod "018f3b6e-7828-44c1-923e-f438710195ca" (UID: "018f3b6e-7828-44c1-923e-f438710195ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339795 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339813 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339825 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.344530 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j" (OuterVolumeSpecName: "kube-api-access-gd25j") pod "018f3b6e-7828-44c1-923e-f438710195ca" (UID: "018f3b6e-7828-44c1-923e-f438710195ca"). InnerVolumeSpecName "kube-api-access-gd25j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.442739 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.442843 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") pod \"018f3b6e-7828-44c1-923e-f438710195ca\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.442917 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.442969 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.443007 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.443336 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd25j\" (UniqueName: 
\"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.444436 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.445074 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config" (OuterVolumeSpecName: "config") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.448425 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "018f3b6e-7828-44c1-923e-f438710195ca" (UID: "018f3b6e-7828-44c1-923e-f438710195ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.449634 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.450617 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n" (OuterVolumeSpecName: "kube-api-access-k845n") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "kube-api-access-k845n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545419 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545471 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545488 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545500 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545516 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.301782 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.301829 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.349693 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.355672 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:30:38 crc kubenswrapper[5012]: E0219 05:30:38.356251 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" containerName="collect-profiles" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.356278 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" containerName="collect-profiles" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.356665 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" containerName="collect-profiles" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.357619 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.360890 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.363796 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.364198 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.365479 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.365602 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.365710 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.384560 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.388144 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.395274 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.404535 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:38 crc 
kubenswrapper[5012]: I0219 05:30:38.408973 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.456967 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.457103 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.457145 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.457173 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 
05:30:38.457432 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558572 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558622 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558646 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558667 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " 
pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558689 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.559575 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.559936 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.560268 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.586433 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") pod 
\"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.600540 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.684081 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.717450 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018f3b6e-7828-44c1-923e-f438710195ca" path="/var/lib/kubelet/pods/018f3b6e-7828-44c1-923e-f438710195ca/volumes" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.718166 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e26810d-df6b-4534-bdab-c3d121e79479" path="/var/lib/kubelet/pods/5e26810d-df6b-4534-bdab-c3d121e79479/volumes" Feb 19 05:30:39 crc kubenswrapper[5012]: I0219 05:30:39.182497 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:30:39 crc kubenswrapper[5012]: I0219 05:30:39.310349 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" event={"ID":"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065","Type":"ContainerStarted","Data":"241f515f82f6fd136ce265688cc81997f56dc2b17fde97525591ae1b17f15e90"} Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.317171 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" 
event={"ID":"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065","Type":"ContainerStarted","Data":"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028"} Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.317696 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.323294 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.341964 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" podStartSLOduration=4.341948837 podStartE2EDuration="4.341948837s" podCreationTimestamp="2026-02-19 05:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:30:40.341431893 +0000 UTC m=+336.374754482" watchObservedRunningTime="2026-02-19 05:30:40.341948837 +0000 UTC m=+336.375271416" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.417913 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.418469 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.420565 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.421679 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.421720 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.421814 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.422101 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.422211 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.437113 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.506368 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.506465 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.506495 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.506528 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.607991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.608046 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") pod 
\"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.608087 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.608135 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.609203 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.609325 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.623846 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.629245 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.748549 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.977247 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.327765 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" event={"ID":"71ef05be-3ff3-4a9f-b039-19c1840d1e2b","Type":"ContainerStarted","Data":"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956"} Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.328255 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" event={"ID":"71ef05be-3ff3-4a9f-b039-19c1840d1e2b","Type":"ContainerStarted","Data":"dfd61f82ed50896c324968c9e67ea833599f37a8a37e450778101c6d08037e66"} Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.329812 5012 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.354009 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" podStartSLOduration=5.353991761 podStartE2EDuration="5.353991761s" podCreationTimestamp="2026-02-19 05:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:30:41.350431533 +0000 UTC m=+337.383754172" watchObservedRunningTime="2026-02-19 05:30:41.353991761 +0000 UTC m=+337.387314340" Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.519465 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:44 crc kubenswrapper[5012]: I0219 05:30:44.431245 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:30:44 crc kubenswrapper[5012]: I0219 05:30:44.431694 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.575286 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cxb7f"] Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.577270 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.580182 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.598765 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxb7f"] Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.633559 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-utilities\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.633654 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xnhc\" (UniqueName: \"kubernetes.io/projected/cecb9fea-b109-4267-918f-765d774f76de-kube-api-access-9xnhc\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.633731 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-catalog-content\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.735101 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-utilities\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " 
pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.735145 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xnhc\" (UniqueName: \"kubernetes.io/projected/cecb9fea-b109-4267-918f-765d774f76de-kube-api-access-9xnhc\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.735173 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-catalog-content\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.735882 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-catalog-content\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.736013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-utilities\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.760558 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m458l"] Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.762535 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.767076 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.776095 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xnhc\" (UniqueName: \"kubernetes.io/projected/cecb9fea-b109-4267-918f-765d774f76de-kube-api-access-9xnhc\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.779417 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m458l"] Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.836391 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-utilities\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.836446 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8qlb\" (UniqueName: \"kubernetes.io/projected/81c19ca5-841c-4d69-b2ca-a7649d14492f-kube-api-access-q8qlb\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.836475 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-catalog-content\") pod \"redhat-marketplace-m458l\" (UID: 
\"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.896944 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.938180 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-utilities\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.938258 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8qlb\" (UniqueName: \"kubernetes.io/projected/81c19ca5-841c-4d69-b2ca-a7649d14492f-kube-api-access-q8qlb\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.938293 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-catalog-content\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.939097 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-utilities\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.939261 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-catalog-content\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.971005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8qlb\" (UniqueName: \"kubernetes.io/projected/81c19ca5-841c-4d69-b2ca-a7649d14492f-kube-api-access-q8qlb\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:04 crc kubenswrapper[5012]: I0219 05:31:04.113340 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:04 crc kubenswrapper[5012]: I0219 05:31:04.391090 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxb7f"] Feb 19 05:31:04 crc kubenswrapper[5012]: W0219 05:31:04.401249 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcecb9fea_b109_4267_918f_765d774f76de.slice/crio-0c2d27083f289e84547829ffafd41fc7390a2dfe1bbf4e0eef99f84fbb54839d WatchSource:0}: Error finding container 0c2d27083f289e84547829ffafd41fc7390a2dfe1bbf4e0eef99f84fbb54839d: Status 404 returned error can't find the container with id 0c2d27083f289e84547829ffafd41fc7390a2dfe1bbf4e0eef99f84fbb54839d Feb 19 05:31:04 crc kubenswrapper[5012]: I0219 05:31:04.477411 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxb7f" event={"ID":"cecb9fea-b109-4267-918f-765d774f76de","Type":"ContainerStarted","Data":"0c2d27083f289e84547829ffafd41fc7390a2dfe1bbf4e0eef99f84fbb54839d"} Feb 19 05:31:04 crc kubenswrapper[5012]: I0219 05:31:04.624511 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-m458l"] Feb 19 05:31:04 crc kubenswrapper[5012]: W0219 05:31:04.642105 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81c19ca5_841c_4d69_b2ca_a7649d14492f.slice/crio-a4a2ee58a1ee0443a3bb2126af73b4500eab5e427c6c1f2da19a17691450ecab WatchSource:0}: Error finding container a4a2ee58a1ee0443a3bb2126af73b4500eab5e427c6c1f2da19a17691450ecab: Status 404 returned error can't find the container with id a4a2ee58a1ee0443a3bb2126af73b4500eab5e427c6c1f2da19a17691450ecab Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.366383 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zflwk"] Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.370038 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.373749 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.384234 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zflwk"] Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.465337 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-utilities\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.465397 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9948j\" (UniqueName: 
\"kubernetes.io/projected/b555779a-946d-4ad9-93a6-2b0673f81cfa-kube-api-access-9948j\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.465496 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-catalog-content\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.484487 5012 generic.go:334] "Generic (PLEG): container finished" podID="81c19ca5-841c-4d69-b2ca-a7649d14492f" containerID="3bb96dc6ea9ca0e073c0895f30eb25f91844a36f786ef70b6971d9793f352f16" exitCode=0 Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.484562 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m458l" event={"ID":"81c19ca5-841c-4d69-b2ca-a7649d14492f","Type":"ContainerDied","Data":"3bb96dc6ea9ca0e073c0895f30eb25f91844a36f786ef70b6971d9793f352f16"} Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.484594 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m458l" event={"ID":"81c19ca5-841c-4d69-b2ca-a7649d14492f","Type":"ContainerStarted","Data":"a4a2ee58a1ee0443a3bb2126af73b4500eab5e427c6c1f2da19a17691450ecab"} Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.486737 5012 generic.go:334] "Generic (PLEG): container finished" podID="cecb9fea-b109-4267-918f-765d774f76de" containerID="2ec212b49789d746369dd0d46fb64e5f8a52d1f36073c17ad109017f565d5cc0" exitCode=0 Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.486765 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxb7f" 
event={"ID":"cecb9fea-b109-4267-918f-765d774f76de","Type":"ContainerDied","Data":"2ec212b49789d746369dd0d46fb64e5f8a52d1f36073c17ad109017f565d5cc0"} Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.567468 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-catalog-content\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.567724 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-utilities\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.567762 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9948j\" (UniqueName: \"kubernetes.io/projected/b555779a-946d-4ad9-93a6-2b0673f81cfa-kube-api-access-9948j\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.568295 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-utilities\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.568720 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-catalog-content\") pod 
\"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.592291 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9948j\" (UniqueName: \"kubernetes.io/projected/b555779a-946d-4ad9-93a6-2b0673f81cfa-kube-api-access-9948j\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.735671 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.213619 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zflwk"] Feb 19 05:31:06 crc kubenswrapper[5012]: W0219 05:31:06.222222 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb555779a_946d_4ad9_93a6_2b0673f81cfa.slice/crio-9b4e5b05bdd827b02856a52a173b715b5cccdfa3c2d5af46010710f7d63bba3a WatchSource:0}: Error finding container 9b4e5b05bdd827b02856a52a173b715b5cccdfa3c2d5af46010710f7d63bba3a: Status 404 returned error can't find the container with id 9b4e5b05bdd827b02856a52a173b715b5cccdfa3c2d5af46010710f7d63bba3a Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.357211 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.358434 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.360875 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.372714 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.480378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.480887 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.480914 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.505319 5012 generic.go:334] "Generic (PLEG): container finished" podID="b555779a-946d-4ad9-93a6-2b0673f81cfa" containerID="1f9856a85600d035cb8ae20af1e60c7ec749b8285e0eaf03555f8a2fcad90706" exitCode=0 Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 
05:31:06.505416 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerDied","Data":"1f9856a85600d035cb8ae20af1e60c7ec749b8285e0eaf03555f8a2fcad90706"} Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.505453 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerStarted","Data":"9b4e5b05bdd827b02856a52a173b715b5cccdfa3c2d5af46010710f7d63bba3a"} Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.511092 5012 generic.go:334] "Generic (PLEG): container finished" podID="81c19ca5-841c-4d69-b2ca-a7649d14492f" containerID="6082ddf6d79e147f1c61622ab7264da1b8d5c390b814f06355a4ff2d1ac6b44a" exitCode=0 Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.511156 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m458l" event={"ID":"81c19ca5-841c-4d69-b2ca-a7649d14492f","Type":"ContainerDied","Data":"6082ddf6d79e147f1c61622ab7264da1b8d5c390b814f06355a4ff2d1ac6b44a"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.583071 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.583156 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc 
kubenswrapper[5012]: I0219 05:31:06.583180 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.583808 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.584273 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.609976 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.692505 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.000728 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m5vgr"] Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.001964 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.029713 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m5vgr"] Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090195 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc4vq\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-kube-api-access-jc4vq\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090227 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e7ced67-3fa8-4660-951b-4189c7d078c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-bound-sa-token\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc 
kubenswrapper[5012]: I0219 05:31:07.090286 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090332 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-trusted-ca\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090711 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e7ced67-3fa8-4660-951b-4189c7d078c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090736 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-tls\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-certificates\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.111166 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192590 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e7ced67-3fa8-4660-951b-4189c7d078c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-tls\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192716 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-certificates\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192768 
5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc4vq\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-kube-api-access-jc4vq\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192792 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e7ced67-3fa8-4660-951b-4189c7d078c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192816 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-bound-sa-token\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192858 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-trusted-ca\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.193263 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e7ced67-3fa8-4660-951b-4189c7d078c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.197269 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-certificates\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.198570 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-trusted-ca\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.210390 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e7ced67-3fa8-4660-951b-4189c7d078c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.210579 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-tls\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.214038 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-bound-sa-token\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: 
\"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.215709 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc4vq\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-kube-api-access-jc4vq\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.275124 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.334844 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.531989 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerStarted","Data":"18c04d3e0a556e08e145d8671024263e121d7285528693acc63a8584858c5547"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.548520 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m458l" event={"ID":"81c19ca5-841c-4d69-b2ca-a7649d14492f","Type":"ContainerStarted","Data":"acd1a68a3588e87125f965caeaa54907e323a070f3f5ea824ceb8312fcd4e767"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.558981 5012 generic.go:334] "Generic (PLEG): container finished" podID="cecb9fea-b109-4267-918f-765d774f76de" containerID="2ed7d09ee5975ad995e8e62683134569a8f178080301ed6dc1f2a8d6791a5bb2" exitCode=0 Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.559051 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxb7f" 
event={"ID":"cecb9fea-b109-4267-918f-765d774f76de","Type":"ContainerDied","Data":"2ed7d09ee5975ad995e8e62683134569a8f178080301ed6dc1f2a8d6791a5bb2"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.570423 5012 generic.go:334] "Generic (PLEG): container finished" podID="b03ab861-19bb-4215-9b19-990a14b35367" containerID="8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b" exitCode=0 Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.570478 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerDied","Data":"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.570508 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerStarted","Data":"b3f8ca73c66c4fd97d0f19be0a24c8b8a95a41c1d3401dfb594c1ddc1a916e29"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.576625 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m458l" podStartSLOduration=3.061825416 podStartE2EDuration="4.576609257s" podCreationTimestamp="2026-02-19 05:31:03 +0000 UTC" firstStartedPulling="2026-02-19 05:31:05.486745585 +0000 UTC m=+361.520068154" lastFinishedPulling="2026-02-19 05:31:07.001529416 +0000 UTC m=+363.034851995" observedRunningTime="2026-02-19 05:31:07.575328602 +0000 UTC m=+363.608651171" watchObservedRunningTime="2026-02-19 05:31:07.576609257 +0000 UTC m=+363.609931826" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.786455 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m5vgr"] Feb 19 05:31:07 crc kubenswrapper[5012]: W0219 05:31:07.791035 5012 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e7ced67_3fa8_4660_951b_4189c7d078c1.slice/crio-2abf1cfb5dc808ba6052f5f03399c0b57ba8d855ee77fe7089a16877e5137581 WatchSource:0}: Error finding container 2abf1cfb5dc808ba6052f5f03399c0b57ba8d855ee77fe7089a16877e5137581: Status 404 returned error can't find the container with id 2abf1cfb5dc808ba6052f5f03399c0b57ba8d855ee77fe7089a16877e5137581 Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.577888 5012 generic.go:334] "Generic (PLEG): container finished" podID="b555779a-946d-4ad9-93a6-2b0673f81cfa" containerID="18c04d3e0a556e08e145d8671024263e121d7285528693acc63a8584858c5547" exitCode=0 Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.577957 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerDied","Data":"18c04d3e0a556e08e145d8671024263e121d7285528693acc63a8584858c5547"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.579420 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" event={"ID":"5e7ced67-3fa8-4660-951b-4189c7d078c1","Type":"ContainerStarted","Data":"ca838889ea01da4806d579a4d22e99aa9471a255e8c9bcb3e5b14495f10abcc4"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.579457 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" event={"ID":"5e7ced67-3fa8-4660-951b-4189c7d078c1","Type":"ContainerStarted","Data":"2abf1cfb5dc808ba6052f5f03399c0b57ba8d855ee77fe7089a16877e5137581"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.579664 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.582005 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cxb7f" event={"ID":"cecb9fea-b109-4267-918f-765d774f76de","Type":"ContainerStarted","Data":"da121bb8e64b250b4a3b9532132adb26d96746a0941a1029a305150fe510833e"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.584150 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerStarted","Data":"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.644059 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" podStartSLOduration=2.644038991 podStartE2EDuration="2.644038991s" podCreationTimestamp="2026-02-19 05:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:31:08.641697097 +0000 UTC m=+364.675019656" watchObservedRunningTime="2026-02-19 05:31:08.644038991 +0000 UTC m=+364.677361570" Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.662349 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cxb7f" podStartSLOduration=3.201146057 podStartE2EDuration="5.662332553s" podCreationTimestamp="2026-02-19 05:31:03 +0000 UTC" firstStartedPulling="2026-02-19 05:31:05.488716579 +0000 UTC m=+361.522039158" lastFinishedPulling="2026-02-19 05:31:07.949903075 +0000 UTC m=+363.983225654" observedRunningTime="2026-02-19 05:31:08.658844217 +0000 UTC m=+364.692166796" watchObservedRunningTime="2026-02-19 05:31:08.662332553 +0000 UTC m=+364.695655112" Feb 19 05:31:09 crc kubenswrapper[5012]: I0219 05:31:09.592681 5012 generic.go:334] "Generic (PLEG): container finished" podID="b03ab861-19bb-4215-9b19-990a14b35367" containerID="8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f" exitCode=0 
Feb 19 05:31:09 crc kubenswrapper[5012]: I0219 05:31:09.592804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerDied","Data":"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f"} Feb 19 05:31:09 crc kubenswrapper[5012]: I0219 05:31:09.597501 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerStarted","Data":"3900d249d58e0777de23ce8421fd67853f9df359240e7b9335c423524b4c196b"} Feb 19 05:31:09 crc kubenswrapper[5012]: I0219 05:31:09.662264 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zflwk" podStartSLOduration=2.230236528 podStartE2EDuration="4.662239984s" podCreationTimestamp="2026-02-19 05:31:05 +0000 UTC" firstStartedPulling="2026-02-19 05:31:06.516432513 +0000 UTC m=+362.549755112" lastFinishedPulling="2026-02-19 05:31:08.948435969 +0000 UTC m=+364.981758568" observedRunningTime="2026-02-19 05:31:09.651990623 +0000 UTC m=+365.685313192" watchObservedRunningTime="2026-02-19 05:31:09.662239984 +0000 UTC m=+365.695562563" Feb 19 05:31:10 crc kubenswrapper[5012]: I0219 05:31:10.606801 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerStarted","Data":"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80"} Feb 19 05:31:10 crc kubenswrapper[5012]: I0219 05:31:10.625935 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bj5sc" podStartSLOduration=2.201614118 podStartE2EDuration="4.625905372s" podCreationTimestamp="2026-02-19 05:31:06 +0000 UTC" firstStartedPulling="2026-02-19 05:31:07.574632893 +0000 UTC m=+363.607955462" 
lastFinishedPulling="2026-02-19 05:31:09.998924127 +0000 UTC m=+366.032246716" observedRunningTime="2026-02-19 05:31:10.624663328 +0000 UTC m=+366.657985927" watchObservedRunningTime="2026-02-19 05:31:10.625905372 +0000 UTC m=+366.659227951" Feb 19 05:31:13 crc kubenswrapper[5012]: I0219 05:31:13.898491 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:13 crc kubenswrapper[5012]: I0219 05:31:13.899123 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.114040 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.114646 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.202338 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.430507 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.430602 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.683794 5012 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.952051 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cxb7f" podUID="cecb9fea-b109-4267-918f-765d774f76de" containerName="registry-server" probeResult="failure" output=< Feb 19 05:31:14 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:31:14 crc kubenswrapper[5012]: > Feb 19 05:31:15 crc kubenswrapper[5012]: I0219 05:31:15.763951 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:15 crc kubenswrapper[5012]: I0219 05:31:15.771984 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:15 crc kubenswrapper[5012]: I0219 05:31:15.843893 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:16 crc kubenswrapper[5012]: I0219 05:31:16.693862 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:16 crc kubenswrapper[5012]: I0219 05:31:16.695586 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:16 crc kubenswrapper[5012]: I0219 05:31:16.702512 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:16 crc kubenswrapper[5012]: I0219 05:31:16.767026 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:17 crc kubenswrapper[5012]: I0219 05:31:17.714286 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:23 crc kubenswrapper[5012]: I0219 05:31:23.976017 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:24 crc kubenswrapper[5012]: I0219 05:31:24.052475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:27 crc kubenswrapper[5012]: I0219 05:31:27.342183 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:27 crc kubenswrapper[5012]: I0219 05:31:27.432279 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:31:34 crc kubenswrapper[5012]: I0219 05:31:34.996861 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:31:34 crc kubenswrapper[5012]: I0219 05:31:34.997959 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerName="controller-manager" containerID="cri-o://c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" gracePeriod=30 Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:34.999845 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.000131 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerName="route-controller-manager" 
containerID="cri-o://c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" gracePeriod=30 Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.444178 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.456922 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.563604 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.563716 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.563801 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") pod \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.563870 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") pod \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " Feb 19 05:31:35 crc 
kubenswrapper[5012]: I0219 05:31:35.563952 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.564001 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.564037 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.564109 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") pod \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.564177 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") pod \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.565128 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config" (OuterVolumeSpecName: "config") pod "71ef05be-3ff3-4a9f-b039-19c1840d1e2b" (UID: 
"71ef05be-3ff3-4a9f-b039-19c1840d1e2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.565238 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca" (OuterVolumeSpecName: "client-ca") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.565371 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config" (OuterVolumeSpecName: "config") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.565403 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.566323 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca" (OuterVolumeSpecName: "client-ca") pod "71ef05be-3ff3-4a9f-b039-19c1840d1e2b" (UID: "71ef05be-3ff3-4a9f-b039-19c1840d1e2b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.572969 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "71ef05be-3ff3-4a9f-b039-19c1840d1e2b" (UID: "71ef05be-3ff3-4a9f-b039-19c1840d1e2b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.573393 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w" (OuterVolumeSpecName: "kube-api-access-4qr8w") pod "71ef05be-3ff3-4a9f-b039-19c1840d1e2b" (UID: "71ef05be-3ff3-4a9f-b039-19c1840d1e2b"). InnerVolumeSpecName "kube-api-access-4qr8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.573523 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss" (OuterVolumeSpecName: "kube-api-access-z72ss") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "kube-api-access-z72ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.574406 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.665965 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.665999 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666011 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666024 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666033 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666041 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666050 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666059 5012 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666068 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.814953 5012 generic.go:334] "Generic (PLEG): container finished" podID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerID="c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" exitCode=0 Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.815016 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" event={"ID":"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065","Type":"ContainerDied","Data":"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028"} Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.815046 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" event={"ID":"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065","Type":"ContainerDied","Data":"241f515f82f6fd136ce265688cc81997f56dc2b17fde97525591ae1b17f15e90"} Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.815054 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.815063 5012 scope.go:117] "RemoveContainer" containerID="c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.819011 5012 generic.go:334] "Generic (PLEG): container finished" podID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerID="c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" exitCode=0 Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.819085 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" event={"ID":"71ef05be-3ff3-4a9f-b039-19c1840d1e2b","Type":"ContainerDied","Data":"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956"} Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.819119 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" event={"ID":"71ef05be-3ff3-4a9f-b039-19c1840d1e2b","Type":"ContainerDied","Data":"dfd61f82ed50896c324968c9e67ea833599f37a8a37e450778101c6d08037e66"} Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.819127 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.832977 5012 scope.go:117] "RemoveContainer" containerID="c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" Feb 19 05:31:35 crc kubenswrapper[5012]: E0219 05:31:35.833642 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028\": container with ID starting with c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028 not found: ID does not exist" containerID="c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.833691 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028"} err="failed to get container status \"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028\": rpc error: code = NotFound desc = could not find container \"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028\": container with ID starting with c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028 not found: ID does not exist" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.833730 5012 scope.go:117] "RemoveContainer" containerID="c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.851480 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.854175 5012 scope.go:117] "RemoveContainer" containerID="c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" Feb 19 05:31:35 crc kubenswrapper[5012]: E0219 05:31:35.855211 5012 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956\": container with ID starting with c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956 not found: ID does not exist" containerID="c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.855246 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956"} err="failed to get container status \"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956\": rpc error: code = NotFound desc = could not find container \"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956\": container with ID starting with c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956 not found: ID does not exist" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.862980 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.866787 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.870716 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.462100 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58679ff78b-j5lgb"] Feb 19 05:31:36 crc kubenswrapper[5012]: E0219 05:31:36.462578 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerName="controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 
05:31:36.462599 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerName="controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: E0219 05:31:36.462634 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerName="route-controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.462643 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerName="route-controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.462988 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerName="controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.463029 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerName="route-controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.464358 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.467622 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.467884 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.469842 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs"] Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.469892 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.470192 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.471260 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.473410 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.480553 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.480924 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481176 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481294 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58679ff78b-j5lgb"] Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481522 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481843 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481858 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481857 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.484988 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 
05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.508219 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs"] Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581194 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtl6\" (UniqueName: \"kubernetes.io/projected/ed296d68-b4cc-4931-8424-35586d5d0570-kube-api-access-brtl6\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581264 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-proxy-ca-bundles\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581297 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-client-ca\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581364 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed296d68-b4cc-4931-8424-35586d5d0570-serving-cert\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc 
kubenswrapper[5012]: I0219 05:31:36.581399 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn8g6\" (UniqueName: \"kubernetes.io/projected/414aed21-07fc-4c2c-90fb-8fc90fd728e8-kube-api-access-zn8g6\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581564 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-config\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581679 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-client-ca\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581851 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414aed21-07fc-4c2c-90fb-8fc90fd728e8-serving-cert\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.582000 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-config\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.682991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed296d68-b4cc-4931-8424-35586d5d0570-serving-cert\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn8g6\" (UniqueName: \"kubernetes.io/projected/414aed21-07fc-4c2c-90fb-8fc90fd728e8-kube-api-access-zn8g6\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683052 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-config\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683075 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-client-ca\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 
05:31:36.683104 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414aed21-07fc-4c2c-90fb-8fc90fd728e8-serving-cert\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683128 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-config\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683164 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brtl6\" (UniqueName: \"kubernetes.io/projected/ed296d68-b4cc-4931-8424-35586d5d0570-kube-api-access-brtl6\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683199 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-proxy-ca-bundles\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683217 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-client-ca\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") 
" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-client-ca\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.684942 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-config\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.685899 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-proxy-ca-bundles\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.686639 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed296d68-b4cc-4931-8424-35586d5d0570-serving-cert\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.686676 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-client-ca\") pod 
\"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.690231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-config\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.693698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414aed21-07fc-4c2c-90fb-8fc90fd728e8-serving-cert\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.709890 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" path="/var/lib/kubelet/pods/71ef05be-3ff3-4a9f-b039-19c1840d1e2b/volumes" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.711426 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" path="/var/lib/kubelet/pods/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065/volumes" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.714392 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn8g6\" (UniqueName: \"kubernetes.io/projected/414aed21-07fc-4c2c-90fb-8fc90fd728e8-kube-api-access-zn8g6\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.721834 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brtl6\" (UniqueName: \"kubernetes.io/projected/ed296d68-b4cc-4931-8424-35586d5d0570-kube-api-access-brtl6\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.810694 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.829327 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.319001 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58679ff78b-j5lgb"] Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.392977 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs"] Feb 19 05:31:37 crc kubenswrapper[5012]: W0219 05:31:37.410399 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod414aed21_07fc_4c2c_90fb_8fc90fd728e8.slice/crio-b59addad597d805858e7df6745d7f32a6eab781ca5a89a62b0f0b5cc03a4475a WatchSource:0}: Error finding container b59addad597d805858e7df6745d7f32a6eab781ca5a89a62b0f0b5cc03a4475a: Status 404 returned error can't find the container with id b59addad597d805858e7df6745d7f32a6eab781ca5a89a62b0f0b5cc03a4475a Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.848566 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" 
event={"ID":"414aed21-07fc-4c2c-90fb-8fc90fd728e8","Type":"ContainerStarted","Data":"0c1ab3d22bb836389dbf3d6be4f6993c174e6a8fc2395a45bd92a92446c80f3a"} Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.848622 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" event={"ID":"414aed21-07fc-4c2c-90fb-8fc90fd728e8","Type":"ContainerStarted","Data":"b59addad597d805858e7df6745d7f32a6eab781ca5a89a62b0f0b5cc03a4475a"} Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.849933 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.851154 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" event={"ID":"ed296d68-b4cc-4931-8424-35586d5d0570","Type":"ContainerStarted","Data":"93e92892510169b1f01d731736022db69c8066dcefaa042b93acfcae13a22a3a"} Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.851177 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" event={"ID":"ed296d68-b4cc-4931-8424-35586d5d0570","Type":"ContainerStarted","Data":"7650b4eb8c10c3c45bd474b512b23e3b92ec9094bb525c4df15df4e57940d45d"} Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.851753 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.869436 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.878178 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" podStartSLOduration=2.878163227 podStartE2EDuration="2.878163227s" podCreationTimestamp="2026-02-19 05:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:31:37.875211307 +0000 UTC m=+393.908533886" watchObservedRunningTime="2026-02-19 05:31:37.878163227 +0000 UTC m=+393.911485805" Feb 19 05:31:38 crc kubenswrapper[5012]: I0219 05:31:38.026850 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:38 crc kubenswrapper[5012]: I0219 05:31:38.046249 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" podStartSLOduration=3.046222083 podStartE2EDuration="3.046222083s" podCreationTimestamp="2026-02-19 05:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:31:37.895869566 +0000 UTC m=+393.929192145" watchObservedRunningTime="2026-02-19 05:31:38.046222083 +0000 UTC m=+394.079544652" Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.430489 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.431209 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.431261 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.431923 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.431967 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0" gracePeriod=600 Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.922681 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0" exitCode=0 Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.922776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0"} Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.923704 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd"} Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.923807 5012 scope.go:117] "RemoveContainer" containerID="5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049" Feb 19 05:31:52 crc kubenswrapper[5012]: I0219 05:31:52.496995 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerName="registry" containerID="cri-o://83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd" gracePeriod=30 Feb 19 05:31:52 crc kubenswrapper[5012]: I0219 05:31:52.994728 5012 generic.go:334] "Generic (PLEG): container finished" podID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerID="83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd" exitCode=0 Feb 19 05:31:52 crc kubenswrapper[5012]: I0219 05:31:52.994861 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" event={"ID":"70e7a5c6-0abf-4c78-8087-958a19264b49","Type":"ContainerDied","Data":"83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd"} Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.071043 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153243 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153424 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153505 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153567 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153791 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153851 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153896 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153933 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.156173 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.156249 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.169677 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.171460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.171477 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd" (OuterVolumeSpecName: "kube-api-access-pmhxd") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "kube-api-access-pmhxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.175063 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.177744 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.177859 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256167 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256864 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256890 5012 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256909 5012 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256932 5012 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256949 5012 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256967 5012 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.006748 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" event={"ID":"70e7a5c6-0abf-4c78-8087-958a19264b49","Type":"ContainerDied","Data":"b4a4a4ebd6fc7c45c5fc88ca24394f42a5591b27d7679378f83e52a1da7bb083"} Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.006840 5012 scope.go:117] "RemoveContainer" containerID="83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd" Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.007046 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.060585 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.068938 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.719127 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" path="/var/lib/kubelet/pods/70e7a5c6-0abf-4c78-8087-958a19264b49/volumes" Feb 19 05:33:44 crc kubenswrapper[5012]: I0219 05:33:44.430872 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:33:44 crc kubenswrapper[5012]: I0219 05:33:44.431723 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:34:04 crc kubenswrapper[5012]: I0219 05:34:04.948798 5012 scope.go:117] "RemoveContainer" containerID="06936ac625543a23fe6a94c680d400a453b3652063590fadf1140acbd164e331" Feb 19 05:34:04 crc kubenswrapper[5012]: I0219 05:34:04.977808 5012 scope.go:117] "RemoveContainer" containerID="48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f" Feb 19 05:34:05 crc kubenswrapper[5012]: I0219 05:34:05.008715 5012 scope.go:117] "RemoveContainer" containerID="0fe51da344cbaacf6697c74dcff49e7182b9df6468c8ccbfb60f3cd9e38eda3d" Feb 19 
05:34:05 crc kubenswrapper[5012]: I0219 05:34:05.032859 5012 scope.go:117] "RemoveContainer" containerID="dc1a50c23707e41d34121953c7a07c7a6d9a618fec62090df956fa84f7fc89cb" Feb 19 05:34:14 crc kubenswrapper[5012]: I0219 05:34:14.431218 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:34:14 crc kubenswrapper[5012]: I0219 05:34:14.431641 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.983642 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"] Feb 19 05:34:32 crc kubenswrapper[5012]: E0219 05:34:32.984802 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerName="registry" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.984832 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerName="registry" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.985095 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerName="registry" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.985991 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"
Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.989229 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.989608 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.989958 5012 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-djjz2"
Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.994612 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-sq68l"]
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.003194 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"]
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.003299 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-sq68l"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.005821 5012 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lgrng"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.017549 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-sq68l"]
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.033224 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-drndq"]
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.034280 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.038834 5012 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2qrkh"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.048956 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-drndq"]
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.057094 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp74w\" (UniqueName: \"kubernetes.io/projected/3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02-kube-api-access-sp74w\") pod \"cert-manager-858654f9db-sq68l\" (UID: \"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02\") " pod="cert-manager/cert-manager-858654f9db-sq68l"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.057160 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dhl6\" (UniqueName: \"kubernetes.io/projected/4b5870bd-8fb3-4eef-a893-f31ce8bb1506-kube-api-access-4dhl6\") pod \"cert-manager-cainjector-cf98fcc89-w66zf\" (UID: \"4b5870bd-8fb3-4eef-a893-f31ce8bb1506\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.057242 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvl4r\" (UniqueName: \"kubernetes.io/projected/53138562-0907-4b72-b228-21ef0c561f57-kube-api-access-mvl4r\") pod \"cert-manager-webhook-687f57d79b-drndq\" (UID: \"53138562-0907-4b72-b228-21ef0c561f57\") " pod="cert-manager/cert-manager-webhook-687f57d79b-drndq"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.159288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvl4r\" (UniqueName: \"kubernetes.io/projected/53138562-0907-4b72-b228-21ef0c561f57-kube-api-access-mvl4r\") pod \"cert-manager-webhook-687f57d79b-drndq\" (UID: \"53138562-0907-4b72-b228-21ef0c561f57\") " pod="cert-manager/cert-manager-webhook-687f57d79b-drndq"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.159812 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp74w\" (UniqueName: \"kubernetes.io/projected/3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02-kube-api-access-sp74w\") pod \"cert-manager-858654f9db-sq68l\" (UID: \"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02\") " pod="cert-manager/cert-manager-858654f9db-sq68l"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.160129 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dhl6\" (UniqueName: \"kubernetes.io/projected/4b5870bd-8fb3-4eef-a893-f31ce8bb1506-kube-api-access-4dhl6\") pod \"cert-manager-cainjector-cf98fcc89-w66zf\" (UID: \"4b5870bd-8fb3-4eef-a893-f31ce8bb1506\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.185525 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvl4r\" (UniqueName: \"kubernetes.io/projected/53138562-0907-4b72-b228-21ef0c561f57-kube-api-access-mvl4r\") pod \"cert-manager-webhook-687f57d79b-drndq\" (UID: \"53138562-0907-4b72-b228-21ef0c561f57\") " pod="cert-manager/cert-manager-webhook-687f57d79b-drndq"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.185631 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dhl6\" (UniqueName: \"kubernetes.io/projected/4b5870bd-8fb3-4eef-a893-f31ce8bb1506-kube-api-access-4dhl6\") pod \"cert-manager-cainjector-cf98fcc89-w66zf\" (UID: \"4b5870bd-8fb3-4eef-a893-f31ce8bb1506\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.192385 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp74w\" (UniqueName: \"kubernetes.io/projected/3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02-kube-api-access-sp74w\") pod \"cert-manager-858654f9db-sq68l\" (UID: \"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02\") " pod="cert-manager/cert-manager-858654f9db-sq68l"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.311431 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.322708 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-sq68l"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.352246 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq"
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.561884 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-sq68l"]
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.572190 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.603994 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-drndq"]
Feb 19 05:34:33 crc kubenswrapper[5012]: W0219 05:34:33.606513 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53138562_0907_4b72_b228_21ef0c561f57.slice/crio-3e6a24b6c18a53eb562d357cb0bb03d93fae107003549eff7626cd6bd20d80f0 WatchSource:0}: Error finding container 3e6a24b6c18a53eb562d357cb0bb03d93fae107003549eff7626cd6bd20d80f0: Status 404 returned error can't find the container with id 3e6a24b6c18a53eb562d357cb0bb03d93fae107003549eff7626cd6bd20d80f0
Feb 19 05:34:33 crc
kubenswrapper[5012]: I0219 05:34:33.726913 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"]
Feb 19 05:34:34 crc kubenswrapper[5012]: I0219 05:34:34.421900 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" event={"ID":"4b5870bd-8fb3-4eef-a893-f31ce8bb1506","Type":"ContainerStarted","Data":"2443a21924ca9a9e9e636821e42f8ff74faeae4ba62ab4c8a14c54979eb024cc"}
Feb 19 05:34:34 crc kubenswrapper[5012]: I0219 05:34:34.425262 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-sq68l" event={"ID":"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02","Type":"ContainerStarted","Data":"c81be4549ba714461e1ad842137ec2e97cb94b59a4f7124a63440efa1a2d69ca"}
Feb 19 05:34:34 crc kubenswrapper[5012]: I0219 05:34:34.431721 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" event={"ID":"53138562-0907-4b72-b228-21ef0c561f57","Type":"ContainerStarted","Data":"3e6a24b6c18a53eb562d357cb0bb03d93fae107003549eff7626cd6bd20d80f0"}
Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.463464 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" event={"ID":"53138562-0907-4b72-b228-21ef0c561f57","Type":"ContainerStarted","Data":"d6ba710338d54017600bc89324ba3f087df57b4be683ec57dc23d55033488818"}
Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.464298 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq"
Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.466157 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" event={"ID":"4b5870bd-8fb3-4eef-a893-f31ce8bb1506","Type":"ContainerStarted","Data":"85baf47ad4b311ef24f933fdadb4863eea872ba69aa43e2c9aa67387a980c566"}
Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.468995 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-sq68l" event={"ID":"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02","Type":"ContainerStarted","Data":"8444265bcbc0e077e72a2cdb09628a6f8c212c892780b849c3cca39e475b1495"}
Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.518134 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" podStartSLOduration=2.646956552 podStartE2EDuration="6.518096116s" podCreationTimestamp="2026-02-19 05:34:32 +0000 UTC" firstStartedPulling="2026-02-19 05:34:33.742087464 +0000 UTC m=+569.775410063" lastFinishedPulling="2026-02-19 05:34:37.613227018 +0000 UTC m=+573.646549627" observedRunningTime="2026-02-19 05:34:38.511432603 +0000 UTC m=+574.544755212" watchObservedRunningTime="2026-02-19 05:34:38.518096116 +0000 UTC m=+574.551418755"
Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.521006 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" podStartSLOduration=2.481222108 podStartE2EDuration="6.520985116s" podCreationTimestamp="2026-02-19 05:34:32 +0000 UTC" firstStartedPulling="2026-02-19 05:34:33.608759211 +0000 UTC m=+569.642081780" lastFinishedPulling="2026-02-19 05:34:37.648522179 +0000 UTC m=+573.681844788" observedRunningTime="2026-02-19 05:34:38.493701131 +0000 UTC m=+574.527023760" watchObservedRunningTime="2026-02-19 05:34:38.520985116 +0000 UTC m=+574.554307725"
Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.542617 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-sq68l" podStartSLOduration=2.484787806 podStartE2EDuration="6.542584613s" podCreationTimestamp="2026-02-19 05:34:32 +0000 UTC" firstStartedPulling="2026-02-19 05:34:33.57181601 +0000 UTC m=+569.605138589" lastFinishedPulling="2026-02-19 05:34:37.629612777 +0000 UTC m=+573.662935396" observedRunningTime="2026-02-19 05:34:38.533390089 +0000 UTC m=+574.566712698" watchObservedRunningTime="2026-02-19 05:34:38.542584613 +0000 UTC m=+574.575907212"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.364158 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.387473 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ff9w"]
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390555 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="nbdb" containerID="cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" gracePeriod=30
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390462 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="sbdb" containerID="cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" gracePeriod=30
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390862 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" gracePeriod=30
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390934 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-node" containerID="cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" gracePeriod=30
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390955 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-acl-logging" containerID="cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" gracePeriod=30
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390823 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="northd" containerID="cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" gracePeriod=30
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.396582 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-controller" containerID="cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" gracePeriod=30
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.456922 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" containerID="cri-o://92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" gracePeriod=30
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509391 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/2.log"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509715 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/1.log"
Feb 19
05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509745 5012 generic.go:334] "Generic (PLEG): container finished" podID="e7a04e36-fbaa-4de1-871a-7225433eebb0" containerID="9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05" exitCode=2
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509771 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerDied","Data":"9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05"}
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509802 5012 scope.go:117] "RemoveContainer" containerID="fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.510232 5012 scope.go:117] "RemoveContainer" containerID="9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05"
Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.510399 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lkrsg_openshift-multus(e7a04e36-fbaa-4de1-871a-7225433eebb0)\"" pod="openshift-multus/multus-lkrsg" podUID="e7a04e36-fbaa-4de1-871a-7225433eebb0"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.785477 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.788294 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovn-acl-logging/0.log"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.789009 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovn-controller/0.log"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.789483 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w"
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.837730 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.837830 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.837901 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838012 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838124 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838215 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838342 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838383 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838457 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838447 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838498 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log" (OuterVolumeSpecName: "node-log") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838539 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") "
Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838553 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash" (OuterVolumeSpecName: "host-slash") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-slash".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838569 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838609 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838650 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838661 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838719 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838819 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838894 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838943 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838997 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839028 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839121 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839198 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839281 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839840 5012 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839930 5012 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 
05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839960 5012 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839979 5012 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839997 5012 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840057 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839038 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839062 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.841060 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839086 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840119 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket" (OuterVolumeSpecName: "log-socket") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840149 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840246 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840273 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840560 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840814 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.841435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.846201 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz" (OuterVolumeSpecName: "kube-api-access-sj2rz") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "kube-api-access-sj2rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.847618 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.862663 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865060 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6qzx6"] Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865276 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="sbdb" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865287 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="sbdb" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865295 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865317 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865325 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-acl-logging" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865331 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-acl-logging" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865341 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865346 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865353 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="nbdb" Feb 
19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865360 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="nbdb" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865368 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="northd" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865374 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="northd" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865383 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865388 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865395 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865401 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865408 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-node" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865414 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-node" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865423 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 05:34:43 crc 
kubenswrapper[5012]: I0219 05:34:43.865428 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865437 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865443 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865452 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kubecfg-setup" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865458 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kubecfg-setup" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865540 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865548 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-node" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865557 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865563 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865576 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="sbdb" Feb 
19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865595 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865604 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865611 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865618 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="northd" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865627 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-acl-logging" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865635 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="nbdb" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865919 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865925 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.866007 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.868691 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.940756 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.940927 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-etc-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.940991 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941029 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-slash\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941102 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-var-lib-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941164 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovn-node-metrics-cert\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941263 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-ovn\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941339 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-netns\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941439 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-systemd-units\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941491 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-netd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941561 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-systemd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941637 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb25b\" (UniqueName: \"kubernetes.io/projected/6ec62621-f2fc-41ba-b7d6-9a19035ca269-kube-api-access-lb25b\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941784 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-log-socket\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941888 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-kubelet\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941971 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-node-log\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-bin\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942121 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-config\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942400 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-script-lib\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942514 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942568 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-env-overrides\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942760 5012 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942803 5012 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942824 5012 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942848 5012 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942874 5012 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942897 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942919 5012 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942942 5012 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942963 5012 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943052 5012 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943072 5012 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943089 5012 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943105 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943122 5012 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044838 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-kubelet\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044913 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-node-log\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044938 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-bin\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044972 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-config\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044999 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-script-lib\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045022 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045030 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-kubelet\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045114 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-node-log\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045177 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-bin\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045042 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-env-overrides\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045708 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045825 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-etc-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045746 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-etc-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045925 5012 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046063 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-slash\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046117 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-var-lib-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046128 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-env-overrides\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046156 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovn-node-metrics-cert\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046137 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-config\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046175 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-slash\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046198 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-ovn\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046165 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-var-lib-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046290 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-netns\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046344 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-netns\") 
pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046265 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-ovn\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046378 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-systemd-units\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046433 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-netd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046446 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-systemd-units\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046498 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-systemd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046499 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-netd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046534 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-systemd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb25b\" (UniqueName: \"kubernetes.io/projected/6ec62621-f2fc-41ba-b7d6-9a19035ca269-kube-api-access-lb25b\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046602 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-log-socket\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046687 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-log-socket\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046799 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-script-lib\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.047001 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.049423 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovn-node-metrics-cert\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.069726 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb25b\" (UniqueName: \"kubernetes.io/projected/6ec62621-f2fc-41ba-b7d6-9a19035ca269-kube-api-access-lb25b\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.228443 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.430876 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.430953 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.431012 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.431807 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.431895 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd" gracePeriod=600 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.519281 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="6ec62621-f2fc-41ba-b7d6-9a19035ca269" containerID="a4f9d7bb55f0f3df054c5cd07c7f254b44e014e914d904ca33bd67f4a3dd4a9c" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.519380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerDied","Data":"a4f9d7bb55f0f3df054c5cd07c7f254b44e014e914d904ca33bd67f4a3dd4a9c"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.519411 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"4cb6291a83582275e968fc77dddad00d94f4693222836996f1df9d2be754f112"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.522189 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/2.log" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.528461 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530175 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovn-acl-logging/0.log" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530609 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovn-controller/0.log" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530941 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530961 
5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530969 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530975 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530983 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530991 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530999 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" exitCode=143 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531005 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" exitCode=143 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531021 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" 
event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531043 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531053 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531061 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531070 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531078 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531088 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:34:44 
crc kubenswrapper[5012]: I0219 05:34:44.531096 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531101 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531106 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531111 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531116 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531149 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531155 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531161 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:34:44 crc 
kubenswrapper[5012]: I0219 05:34:44.531169 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531179 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531187 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531193 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531200 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531207 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531214 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531220 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531226 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531231 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531239 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531248 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531257 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531263 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531269 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531275 5012 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531280 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531285 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531290 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531295 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531314 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531319 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531326 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"6412d35e0c37d9d105ee4ca82031f54078f7add4cd5d9abd98a4a8c14bd96adb"} Feb 19 
05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531333 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531339 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531344 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531350 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531354 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531360 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531365 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531370 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 
05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531375 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531380 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531393 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531489 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.620177 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.661778 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ff9w"] Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.665312 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ff9w"] Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.666857 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.686056 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.711192 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" path="/var/lib/kubelet/pods/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/volumes" Feb 19 
05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.721666 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.754603 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.773339 5012 scope.go:117] "RemoveContainer" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.802748 5012 scope.go:117] "RemoveContainer" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.820473 5012 scope.go:117] "RemoveContainer" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.843758 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.890316 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.890844 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.890874 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status 
\"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.890894 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.891437 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.891480 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} err="failed to get container status \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.891507 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.891901 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not found: ID does not exist" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.891927 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} err="failed to get container status \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": rpc error: code = NotFound desc = could not find container \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.891942 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.892462 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.892494 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} err="failed to get container status \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID 
starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.892507 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.892798 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.892817 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} err="failed to get container status \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.892828 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.893115 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 
05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893134 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} err="failed to get container status \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893145 5012 scope.go:117] "RemoveContainer" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.893469 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893489 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} err="failed to get container status \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893501 5012 scope.go:117] "RemoveContainer" 
containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.893829 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893854 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} err="failed to get container status \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893869 5012 scope.go:117] "RemoveContainer" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.894222 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.894269 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} err="failed to get container status \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": rpc error: code = NotFound desc = could not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.894332 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.894730 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.894832 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} err="failed to get container status \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.894857 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.895269 5012 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.895388 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.895733 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} err="failed to get container status \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.895782 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.896380 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} err="failed to get container status \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": rpc error: code = NotFound desc = could not find container \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not 
found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.896416 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.897710 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} err="failed to get container status \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.897747 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.898139 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} err="failed to get container status \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.898171 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.899200 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} err="failed to get 
container status \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.899237 5012 scope.go:117] "RemoveContainer" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.899637 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} err="failed to get container status \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.899673 5012 scope.go:117] "RemoveContainer" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.900187 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} err="failed to get container status \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.900225 5012 scope.go:117] "RemoveContainer" 
containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.900824 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} err="failed to get container status \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": rpc error: code = NotFound desc = could not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.900859 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.901597 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} err="failed to get container status \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.901634 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.901963 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could 
not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.901998 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.902607 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} err="failed to get container status \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.902642 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.903031 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} err="failed to get container status \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": rpc error: code = NotFound desc = could not find container \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.903068 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 
05:34:44.903576 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} err="failed to get container status \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.903609 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.904163 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} err="failed to get container status \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.904196 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.904605 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} err="failed to get container status \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 
988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.904638 5012 scope.go:117] "RemoveContainer" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905143 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} err="failed to get container status \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905176 5012 scope.go:117] "RemoveContainer" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905660 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} err="failed to get container status \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905683 5012 scope.go:117] "RemoveContainer" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905961 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} err="failed to get container status \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": rpc error: code = NotFound desc = could not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905985 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.906398 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} err="failed to get container status \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.906421 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.906726 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not 
exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.906749 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.907154 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} err="failed to get container status \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.907179 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.907681 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} err="failed to get container status \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": rpc error: code = NotFound desc = could not find container \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.907705 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.908027 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} err="failed to get container status 
\"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.908051 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.908530 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} err="failed to get container status \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.908556 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.909720 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} err="failed to get container status \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.909747 5012 scope.go:117] "RemoveContainer" 
containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.911626 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} err="failed to get container status \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.911691 5012 scope.go:117] "RemoveContainer" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.912210 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} err="failed to get container status \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.912232 5012 scope.go:117] "RemoveContainer" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.912795 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} err="failed to get container status \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": rpc error: code = NotFound desc = could 
not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.912825 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.913210 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} err="failed to get container status \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.913272 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.913673 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.555843 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" 
event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"83b227eb429e2994b96c5bdff9ce49cbc53ebb95afb0ef4ccd253f72f62a5d1d"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556198 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"2f9be6835fcf01065e2130c3e3487efb2ef54eee456cd2bd27b15435706ccde1"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556501 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"b93bc1b9b7f81328c8872f144656ad31f1788dc7f59cf686cac4e0ec3a4842f7"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"e6bc3a7a41ac63fd15ffe32ba2ce551e4737100351913913b47699bdb695871d"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556537 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"b37addd1b4f359e4bb39d9bdd286dc3bfa8faee43aa3998695001760be3c6db8"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556553 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"596417a17e986c10a3d00c55f45c692c750dcabb022bc7d916d885bff4108ea8"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.560208 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd" 
exitCode=0 Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.560349 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.560418 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.560459 5012 scope.go:117] "RemoveContainer" containerID="f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0" Feb 19 05:34:48 crc kubenswrapper[5012]: I0219 05:34:48.596011 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"fe8f8afb2d48483b5ab4b267574be9c0db01ed9361a3f5c4ff9ac20c578a82b2"} Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.611839 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"29d0aacb9914b3a0468b4ce7f4f0e8db9c2aeda872ba1bcb4d157e2c3d94a9a3"} Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.617401 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.617460 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.617482 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.660713 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.670384 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" podStartSLOduration=7.670347278 podStartE2EDuration="7.670347278s" podCreationTimestamp="2026-02-19 05:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:34:50.661066282 +0000 UTC m=+586.694388911" watchObservedRunningTime="2026-02-19 05:34:50.670347278 +0000 UTC m=+586.703669887" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.673923 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:57 crc kubenswrapper[5012]: I0219 05:34:57.703069 5012 scope.go:117] "RemoveContainer" containerID="9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05" Feb 19 05:34:57 crc kubenswrapper[5012]: E0219 05:34:57.704024 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lkrsg_openshift-multus(e7a04e36-fbaa-4de1-871a-7225433eebb0)\"" pod="openshift-multus/multus-lkrsg" podUID="e7a04e36-fbaa-4de1-871a-7225433eebb0" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.088960 5012 scope.go:117] "RemoveContainer" containerID="4d96789a875fc9919836ff36dc1d21b427a832c3292532d47b588b770f2a75ed" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.119082 5012 scope.go:117] "RemoveContainer" containerID="bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 
05:35:05.151716 5012 scope.go:117] "RemoveContainer" containerID="a95a4d514f0d6754b1714fed7c7959350d2abe5a30fa95a4004bef33fad2569c" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.179619 5012 scope.go:117] "RemoveContainer" containerID="ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.217172 5012 scope.go:117] "RemoveContainer" containerID="70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.238186 5012 scope.go:117] "RemoveContainer" containerID="b38c4d760b78c9580d7920d8d103f03ae36a4fb22594d35317c5a0fc8161982d" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.723674 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x"] Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.725701 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.728650 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.740737 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x"] Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.823076 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.823159 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.823207 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.924759 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.924854 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc 
kubenswrapper[5012]: I0219 05:35:09.924899 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.925674 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.926152 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.959879 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: I0219 05:35:10.048602 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.079901 5012 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(d844d21b496239b13ccf49092f4ac1d3b9c1170b2358c57c90d4f29587085cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.079994 5012 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(d844d21b496239b13ccf49092f4ac1d3b9c1170b2358c57c90d4f29587085cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.080037 5012 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(d844d21b496239b13ccf49092f4ac1d3b9c1170b2358c57c90d4f29587085cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.080105 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace(5efec1ed-3f58-4825-a63a-ceb26c38531e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace(5efec1ed-3f58-4825-a63a-ceb26c38531e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(d844d21b496239b13ccf49092f4ac1d3b9c1170b2358c57c90d4f29587085cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" Feb 19 05:35:10 crc kubenswrapper[5012]: I0219 05:35:10.703230 5012 scope.go:117] "RemoveContainer" containerID="9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05" Feb 19 05:35:10 crc kubenswrapper[5012]: I0219 05:35:10.755992 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: I0219 05:35:10.756710 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.791207 5012 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(0563d3208f9a6bf6111665e5fefaab643ea03021bc30f4a24e9031267dfd98ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.791363 5012 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(0563d3208f9a6bf6111665e5fefaab643ea03021bc30f4a24e9031267dfd98ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.791405 5012 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(0563d3208f9a6bf6111665e5fefaab643ea03021bc30f4a24e9031267dfd98ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.791496 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace(5efec1ed-3f58-4825-a63a-ceb26c38531e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace(5efec1ed-3f58-4825-a63a-ceb26c38531e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(0563d3208f9a6bf6111665e5fefaab643ea03021bc30f4a24e9031267dfd98ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" Feb 19 05:35:11 crc kubenswrapper[5012]: I0219 05:35:11.766371 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/2.log" Feb 19 05:35:11 crc kubenswrapper[5012]: I0219 05:35:11.767210 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"28bdabb9481fea6fcdefabcabf2c194ea91f6504df441d4df357f2ecbc2368a6"} Feb 19 05:35:14 crc kubenswrapper[5012]: I0219 05:35:14.268325 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:35:25 crc kubenswrapper[5012]: I0219 05:35:25.702634 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:25 crc kubenswrapper[5012]: I0219 05:35:25.704131 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:25 crc kubenswrapper[5012]: I0219 05:35:25.987149 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x"] Feb 19 05:35:26 crc kubenswrapper[5012]: I0219 05:35:26.879853 5012 generic.go:334] "Generic (PLEG): container finished" podID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerID="9fd43f54ae2cf94f3209642ea3f170f3fd0c1ae027d018b3ee9b11362794b164" exitCode=0 Feb 19 05:35:26 crc kubenswrapper[5012]: I0219 05:35:26.879908 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerDied","Data":"9fd43f54ae2cf94f3209642ea3f170f3fd0c1ae027d018b3ee9b11362794b164"} Feb 19 05:35:26 crc kubenswrapper[5012]: I0219 05:35:26.879940 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerStarted","Data":"2f48fe2eaaee966502a2840e3b0f88622eadd98f5013191e11a04a26705b4519"} Feb 19 05:35:28 crc kubenswrapper[5012]: I0219 05:35:28.896193 5012 generic.go:334] "Generic (PLEG): container finished" podID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerID="db55c4833a96e2ce6ad4aed68a18ae58441ae6680848e7568343d555f935b179" exitCode=0 Feb 19 05:35:28 crc kubenswrapper[5012]: I0219 05:35:28.896331 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerDied","Data":"db55c4833a96e2ce6ad4aed68a18ae58441ae6680848e7568343d555f935b179"} Feb 19 05:35:29 crc kubenswrapper[5012]: I0219 05:35:29.907916 5012 generic.go:334] "Generic (PLEG): container finished" podID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerID="29a03e0bb8c6b3d849b2e2d46284673c3830e2ee6a1ed817d423064737124419" exitCode=0 Feb 19 05:35:29 crc kubenswrapper[5012]: I0219 05:35:29.908151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerDied","Data":"29a03e0bb8c6b3d849b2e2d46284673c3830e2ee6a1ed817d423064737124419"} Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.220391 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.352218 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") pod \"5efec1ed-3f58-4825-a63a-ceb26c38531e\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.352289 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") pod \"5efec1ed-3f58-4825-a63a-ceb26c38531e\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.352375 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") pod \"5efec1ed-3f58-4825-a63a-ceb26c38531e\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.356454 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle" (OuterVolumeSpecName: "bundle") pod "5efec1ed-3f58-4825-a63a-ceb26c38531e" (UID: "5efec1ed-3f58-4825-a63a-ceb26c38531e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.361007 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4" (OuterVolumeSpecName: "kube-api-access-6jfc4") pod "5efec1ed-3f58-4825-a63a-ceb26c38531e" (UID: "5efec1ed-3f58-4825-a63a-ceb26c38531e"). InnerVolumeSpecName "kube-api-access-6jfc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.384144 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util" (OuterVolumeSpecName: "util") pod "5efec1ed-3f58-4825-a63a-ceb26c38531e" (UID: "5efec1ed-3f58-4825-a63a-ceb26c38531e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.455775 5012 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.455822 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") on node \"crc\" DevicePath \"\"" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.455843 5012 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") on node \"crc\" DevicePath \"\"" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.926887 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerDied","Data":"2f48fe2eaaee966502a2840e3b0f88622eadd98f5013191e11a04a26705b4519"} Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.926944 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f48fe2eaaee966502a2840e3b0f88622eadd98f5013191e11a04a26705b4519" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.927052 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.322048 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t"] Feb 19 05:35:42 crc kubenswrapper[5012]: E0219 05:35:42.322927 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="util" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.322942 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="util" Feb 19 05:35:42 crc kubenswrapper[5012]: E0219 05:35:42.322965 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="extract" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.322973 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="extract" Feb 19 05:35:42 crc kubenswrapper[5012]: E0219 05:35:42.322990 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="pull" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.322998 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="pull" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.323112 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="extract" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.323584 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.325785 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.326350 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-28dbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.327555 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.339375 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.426032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmtm\" (UniqueName: \"kubernetes.io/projected/9f3d925a-f08d-4e92-baf3-805f27c9ae35-kube-api-access-zfmtm\") pod \"obo-prometheus-operator-68bc856cb9-9t66t\" (UID: \"9f3d925a-f08d-4e92-baf3-805f27c9ae35\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.476727 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.477801 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.479969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.481023 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-v9p5h" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.491457 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.511698 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.512406 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.522015 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.527788 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.527916 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.527949 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.527974 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.528015 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmtm\" (UniqueName: \"kubernetes.io/projected/9f3d925a-f08d-4e92-baf3-805f27c9ae35-kube-api-access-zfmtm\") pod \"obo-prometheus-operator-68bc856cb9-9t66t\" (UID: \"9f3d925a-f08d-4e92-baf3-805f27c9ae35\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.574680 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfmtm\" (UniqueName: \"kubernetes.io/projected/9f3d925a-f08d-4e92-baf3-805f27c9ae35-kube-api-access-zfmtm\") pod \"obo-prometheus-operator-68bc856cb9-9t66t\" (UID: \"9f3d925a-f08d-4e92-baf3-805f27c9ae35\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.628450 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.628519 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.628543 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.628575 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.631852 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.632728 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.645557 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.645594 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.647999 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-vw7xl"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.648659 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.655189 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-r5qcs" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.655393 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.655822 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.661000 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-vw7xl"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.729997 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dkxz\" (UniqueName: \"kubernetes.io/projected/63ee166b-5027-4928-9196-9488685f87d5-kube-api-access-6dkxz\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.730055 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ee166b-5027-4928-9196-9488685f87d5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.772060 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5grbr"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.772748 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.775102 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-czkvc" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.785366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5grbr"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.793647 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.831377 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dkxz\" (UniqueName: \"kubernetes.io/projected/63ee166b-5027-4928-9196-9488685f87d5-kube-api-access-6dkxz\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.831433 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ee166b-5027-4928-9196-9488685f87d5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.831474 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/86bcbf15-9553-41af-974c-3418e588e575-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.831496 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd5lg\" (UniqueName: \"kubernetes.io/projected/86bcbf15-9553-41af-974c-3418e588e575-kube-api-access-hd5lg\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.832220 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.848996 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ee166b-5027-4928-9196-9488685f87d5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.850346 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dkxz\" (UniqueName: \"kubernetes.io/projected/63ee166b-5027-4928-9196-9488685f87d5-kube-api-access-6dkxz\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.938681 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/86bcbf15-9553-41af-974c-3418e588e575-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.938756 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd5lg\" (UniqueName: \"kubernetes.io/projected/86bcbf15-9553-41af-974c-3418e588e575-kube-api-access-hd5lg\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.939539 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/86bcbf15-9553-41af-974c-3418e588e575-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.958094 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd5lg\" (UniqueName: \"kubernetes.io/projected/86bcbf15-9553-41af-974c-3418e588e575-kube-api-access-hd5lg\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.014236 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.099159 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.176682 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t"] Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.181202 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg"] Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.260342 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-vw7xl"] Feb 19 05:35:43 crc kubenswrapper[5012]: W0219 05:35:43.266837 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ee166b_5027_4928_9196_9488685f87d5.slice/crio-f0cf2d5dba7517c5301cb01f9524808995d09d3ee5196cb589bec114b2db0f7b WatchSource:0}: Error finding container 
f0cf2d5dba7517c5301cb01f9524808995d09d3ee5196cb589bec114b2db0f7b: Status 404 returned error can't find the container with id f0cf2d5dba7517c5301cb01f9524808995d09d3ee5196cb589bec114b2db0f7b Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.319111 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp"] Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.330619 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5grbr"] Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.998506 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" event={"ID":"9f3d925a-f08d-4e92-baf3-805f27c9ae35","Type":"ContainerStarted","Data":"8076f3a76096d678df7cb75d647a683c504432ac387ddb0b69792742e06b83cb"} Feb 19 05:35:44 crc kubenswrapper[5012]: I0219 05:35:44.000003 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" event={"ID":"3c60bb85-2242-4d9f-95f9-27b2e747727d","Type":"ContainerStarted","Data":"decbfafcdffb05c9646234dc88d5c9108df84f29d3a2946013f63bb0104908ff"} Feb 19 05:35:44 crc kubenswrapper[5012]: I0219 05:35:44.001280 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" event={"ID":"63ee166b-5027-4928-9196-9488685f87d5","Type":"ContainerStarted","Data":"f0cf2d5dba7517c5301cb01f9524808995d09d3ee5196cb589bec114b2db0f7b"} Feb 19 05:35:44 crc kubenswrapper[5012]: I0219 05:35:44.002576 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" event={"ID":"9364b7f3-e3e3-4432-a4e7-4b80c9a50225","Type":"ContainerStarted","Data":"e169d28e6f802b174e3c8499da1ef41a5d88851bf8b88dd01bc25b22dcb10dd8"} Feb 19 05:35:44 crc kubenswrapper[5012]: 
I0219 05:35:44.003650 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" event={"ID":"86bcbf15-9553-41af-974c-3418e588e575","Type":"ContainerStarted","Data":"9716acd1be5c14c16b4e3470d339d6f4750bdb4784f8703513acb9355a92e079"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.086112 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" event={"ID":"9f3d925a-f08d-4e92-baf3-805f27c9ae35","Type":"ContainerStarted","Data":"3c7b4b34ad99c637f07f3ff0340c42839989e82c30118f8b2a7ee3c62fe12e84"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.088780 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" event={"ID":"3c60bb85-2242-4d9f-95f9-27b2e747727d","Type":"ContainerStarted","Data":"899a5845869dfcfeb04ba83980f09110794eca3bb997776543ade74ecee7195e"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.090960 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" event={"ID":"63ee166b-5027-4928-9196-9488685f87d5","Type":"ContainerStarted","Data":"55dba0532de292e84f9dc303c6cb95587b6ad9eb2f0ca1604e680efedae4a4b0"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.091593 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.095210 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" event={"ID":"9364b7f3-e3e3-4432-a4e7-4b80c9a50225","Type":"ContainerStarted","Data":"69d4dd462bcfd2e048a40def5db79d35be38edd4f7d97212498635ad6b153f73"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.098269 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-5grbr" event={"ID":"86bcbf15-9553-41af-974c-3418e588e575","Type":"ContainerStarted","Data":"c47943eb452a908c828c9e6d64b9e585df12cc83ddf496e5fb1fce96614030ac"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.098959 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.125492 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.131417 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" podStartSLOduration=2.362615518 podStartE2EDuration="12.131389898s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.216796964 +0000 UTC m=+639.250119533" lastFinishedPulling="2026-02-19 05:35:52.985571304 +0000 UTC m=+649.018893913" observedRunningTime="2026-02-19 05:35:54.115832119 +0000 UTC m=+650.149154718" watchObservedRunningTime="2026-02-19 05:35:54.131389898 +0000 UTC m=+650.164712507" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.143901 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" podStartSLOduration=2.322943669 podStartE2EDuration="12.143872972s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.200480246 +0000 UTC m=+639.233802815" lastFinishedPulling="2026-02-19 05:35:53.021409539 +0000 UTC m=+649.054732118" observedRunningTime="2026-02-19 05:35:54.143015061 +0000 UTC m=+650.176337710" watchObservedRunningTime="2026-02-19 05:35:54.143872972 +0000 UTC m=+650.177195581" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.186778 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" podStartSLOduration=2.4357313879999998 podStartE2EDuration="12.186763526s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.268937136 +0000 UTC m=+639.302259705" lastFinishedPulling="2026-02-19 05:35:53.019969264 +0000 UTC m=+649.053291843" observedRunningTime="2026-02-19 05:35:54.184781297 +0000 UTC m=+650.218103866" watchObservedRunningTime="2026-02-19 05:35:54.186763526 +0000 UTC m=+650.220086095" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.204040 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" podStartSLOduration=2.524905173 podStartE2EDuration="12.204016956s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.310945701 +0000 UTC m=+639.344268270" lastFinishedPulling="2026-02-19 05:35:52.990057474 +0000 UTC m=+649.023380053" observedRunningTime="2026-02-19 05:35:54.202182151 +0000 UTC m=+650.235504730" watchObservedRunningTime="2026-02-19 05:35:54.204016956 +0000 UTC m=+650.237339535" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.222906 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" podStartSLOduration=2.578582561 podStartE2EDuration="12.222884615s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.342604103 +0000 UTC m=+639.375926662" lastFinishedPulling="2026-02-19 05:35:52.986906127 +0000 UTC m=+649.020228716" observedRunningTime="2026-02-19 05:35:54.222450464 +0000 UTC m=+650.255773273" watchObservedRunningTime="2026-02-19 05:35:54.222884615 +0000 UTC m=+650.256207194" Feb 19 05:36:03 crc kubenswrapper[5012]: I0219 05:36:03.103841 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.454768 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj"] Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.456237 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.460142 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.472947 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj"] Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.588468 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.588922 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.589047 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.690452 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.690541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.690621 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.691393 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") 
pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.691496 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.719966 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.782670 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:21 crc kubenswrapper[5012]: I0219 05:36:21.053181 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj"] Feb 19 05:36:21 crc kubenswrapper[5012]: I0219 05:36:21.289332 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerStarted","Data":"eee2384060ea0dda1f6a8bbcbf1f6151ab035c791f851aa6cea893c67654a899"} Feb 19 05:36:21 crc kubenswrapper[5012]: I0219 05:36:21.289396 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerStarted","Data":"41b8a51b0e4093530db1c36c82a26b069c40f43543a37b05d0d8145db64abbec"} Feb 19 05:36:22 crc kubenswrapper[5012]: I0219 05:36:22.298765 5012 generic.go:334] "Generic (PLEG): container finished" podID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerID="eee2384060ea0dda1f6a8bbcbf1f6151ab035c791f851aa6cea893c67654a899" exitCode=0 Feb 19 05:36:22 crc kubenswrapper[5012]: I0219 05:36:22.298858 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerDied","Data":"eee2384060ea0dda1f6a8bbcbf1f6151ab035c791f851aa6cea893c67654a899"} Feb 19 05:36:24 crc kubenswrapper[5012]: I0219 05:36:24.316269 5012 generic.go:334] "Generic (PLEG): container finished" podID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerID="bf2f5f9a6d89a4a351477dea01062227c1f4f678c68a29d37d63976636f6c613" exitCode=0 Feb 19 05:36:24 crc kubenswrapper[5012]: I0219 05:36:24.316374 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerDied","Data":"bf2f5f9a6d89a4a351477dea01062227c1f4f678c68a29d37d63976636f6c613"} Feb 19 05:36:25 crc kubenswrapper[5012]: I0219 05:36:25.329498 5012 generic.go:334] "Generic (PLEG): container finished" podID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerID="4f467b3a4163df5642ab20e77f91848f63dec61e4b4430bcf21fe04d298fd6a7" exitCode=0 Feb 19 05:36:25 crc kubenswrapper[5012]: I0219 05:36:25.329574 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerDied","Data":"4f467b3a4163df5642ab20e77f91848f63dec61e4b4430bcf21fe04d298fd6a7"} Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.691244 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.773787 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") pod \"6865121b-f9c2-439e-a64a-bf7d94f35797\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.773835 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") pod \"6865121b-f9c2-439e-a64a-bf7d94f35797\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.773954 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") pod \"6865121b-f9c2-439e-a64a-bf7d94f35797\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.775022 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle" (OuterVolumeSpecName: "bundle") pod "6865121b-f9c2-439e-a64a-bf7d94f35797" (UID: "6865121b-f9c2-439e-a64a-bf7d94f35797"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.784554 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5" (OuterVolumeSpecName: "kube-api-access-7tkl5") pod "6865121b-f9c2-439e-a64a-bf7d94f35797" (UID: "6865121b-f9c2-439e-a64a-bf7d94f35797"). InnerVolumeSpecName "kube-api-access-7tkl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.797954 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util" (OuterVolumeSpecName: "util") pod "6865121b-f9c2-439e-a64a-bf7d94f35797" (UID: "6865121b-f9c2-439e-a64a-bf7d94f35797"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.875693 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") on node \"crc\" DevicePath \"\"" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.875743 5012 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") on node \"crc\" DevicePath \"\"" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.875765 5012 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:36:27 crc kubenswrapper[5012]: I0219 05:36:27.348826 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerDied","Data":"41b8a51b0e4093530db1c36c82a26b069c40f43543a37b05d0d8145db64abbec"} Feb 19 05:36:27 crc kubenswrapper[5012]: I0219 05:36:27.348901 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b8a51b0e4093530db1c36c82a26b069c40f43543a37b05d0d8145db64abbec" Feb 19 05:36:27 crc kubenswrapper[5012]: I0219 05:36:27.348932 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.181567 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2smgj"] Feb 19 05:36:32 crc kubenswrapper[5012]: E0219 05:36:32.181923 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="util" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.181945 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="util" Feb 19 05:36:32 crc kubenswrapper[5012]: E0219 05:36:32.181980 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="extract" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.181993 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="extract" Feb 19 05:36:32 crc kubenswrapper[5012]: E0219 05:36:32.182018 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="pull" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.182032 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="pull" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.182231 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="extract" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.182883 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.185014 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-g5dkw" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.187398 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.187562 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.197626 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2smgj"] Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.277725 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28mk\" (UniqueName: \"kubernetes.io/projected/d6ac1260-4ff8-4025-af6e-35711452ef6f-kube-api-access-c28mk\") pod \"nmstate-operator-694c9596b7-2smgj\" (UID: \"d6ac1260-4ff8-4025-af6e-35711452ef6f\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.378778 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c28mk\" (UniqueName: \"kubernetes.io/projected/d6ac1260-4ff8-4025-af6e-35711452ef6f-kube-api-access-c28mk\") pod \"nmstate-operator-694c9596b7-2smgj\" (UID: \"d6ac1260-4ff8-4025-af6e-35711452ef6f\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.404427 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c28mk\" (UniqueName: \"kubernetes.io/projected/d6ac1260-4ff8-4025-af6e-35711452ef6f-kube-api-access-c28mk\") pod \"nmstate-operator-694c9596b7-2smgj\" (UID: 
\"d6ac1260-4ff8-4025-af6e-35711452ef6f\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.500959 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.782871 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2smgj"] Feb 19 05:36:33 crc kubenswrapper[5012]: I0219 05:36:33.393604 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" event={"ID":"d6ac1260-4ff8-4025-af6e-35711452ef6f","Type":"ContainerStarted","Data":"b566bb203443a360511b0257d1a4b867989faad6c3289560d1391e06254cf1be"} Feb 19 05:36:35 crc kubenswrapper[5012]: I0219 05:36:35.417383 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" event={"ID":"d6ac1260-4ff8-4025-af6e-35711452ef6f","Type":"ContainerStarted","Data":"743a418753ca9e4577b3915d68bcb88a6f09b8f67d23526601079c9e85323f7c"} Feb 19 05:36:35 crc kubenswrapper[5012]: I0219 05:36:35.444092 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" podStartSLOduration=0.979862993 podStartE2EDuration="3.444059026s" podCreationTimestamp="2026-02-19 05:36:32 +0000 UTC" firstStartedPulling="2026-02-19 05:36:32.795274569 +0000 UTC m=+688.828597138" lastFinishedPulling="2026-02-19 05:36:35.259470602 +0000 UTC m=+691.292793171" observedRunningTime="2026-02-19 05:36:35.43394123 +0000 UTC m=+691.467263809" watchObservedRunningTime="2026-02-19 05:36:35.444059026 +0000 UTC m=+691.477381635" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.466569 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-hn274"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 
05:36:41.467873 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.471436 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2zxtm" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.491815 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.492597 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.495147 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.518794 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.521919 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tdz8p"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.522579 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.543408 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-hn274"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.624915 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzq4v\" (UniqueName: \"kubernetes.io/projected/50749fb3-e43e-4874-a0ea-8dabae225f85-kube-api-access-fzq4v\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.624975 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625002 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z84tw\" (UniqueName: \"kubernetes.io/projected/4b5e9e17-84bc-4d05-87f9-328826ea39df-kube-api-access-z84tw\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625049 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-ovs-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625068 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-nmstate-lock\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625101 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-dbus-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625201 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkhk2\" (UniqueName: \"kubernetes.io/projected/91d45b3f-23b3-4342-8168-667f665ffe82-kube-api-access-xkhk2\") pod \"nmstate-metrics-58c85c668d-hn274\" (UID: \"91d45b3f-23b3-4342-8168-667f665ffe82\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.645054 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.645874 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.650387 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.650384 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.650624 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-r25p2" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.681067 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726377 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726417 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84tw\" (UniqueName: \"kubernetes.io/projected/4b5e9e17-84bc-4d05-87f9-328826ea39df-kube-api-access-z84tw\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726483 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-ovs-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726501 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-nmstate-lock\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726521 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqccj\" (UniqueName: \"kubernetes.io/projected/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-kube-api-access-tqccj\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-dbus-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: E0219 05:36:41.726562 5012 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 19 05:36:41 crc kubenswrapper[5012]: E0219 05:36:41.726635 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair podName:50749fb3-e43e-4874-a0ea-8dabae225f85 nodeName:}" failed. No retries permitted until 2026-02-19 05:36:42.226618077 +0000 UTC m=+698.259940636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair") pod "nmstate-webhook-866bcb46dc-mqtfh" (UID: "50749fb3-e43e-4874-a0ea-8dabae225f85") : secret "openshift-nmstate-webhook" not found Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726631 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-nmstate-lock\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726717 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-ovs-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726566 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkhk2\" (UniqueName: \"kubernetes.io/projected/91d45b3f-23b3-4342-8168-667f665ffe82-kube-api-access-xkhk2\") pod \"nmstate-metrics-58c85c668d-hn274\" (UID: \"91d45b3f-23b3-4342-8168-667f665ffe82\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-dbus-socket\") pod \"nmstate-handler-tdz8p\" (UID: 
\"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726945 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzq4v\" (UniqueName: \"kubernetes.io/projected/50749fb3-e43e-4874-a0ea-8dabae225f85-kube-api-access-fzq4v\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726993 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.754178 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84tw\" (UniqueName: \"kubernetes.io/projected/4b5e9e17-84bc-4d05-87f9-328826ea39df-kube-api-access-z84tw\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.760078 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzq4v\" (UniqueName: \"kubernetes.io/projected/50749fb3-e43e-4874-a0ea-8dabae225f85-kube-api-access-fzq4v\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.774949 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkhk2\" (UniqueName: 
\"kubernetes.io/projected/91d45b3f-23b3-4342-8168-667f665ffe82-kube-api-access-xkhk2\") pod \"nmstate-metrics-58c85c668d-hn274\" (UID: \"91d45b3f-23b3-4342-8168-667f665ffe82\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.786450 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.828575 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqccj\" (UniqueName: \"kubernetes.io/projected/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-kube-api-access-tqccj\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.828700 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.829391 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c49886887-28b5c"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.829522 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: E0219 05:36:41.829675 5012 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret 
"plugin-serving-cert" not found Feb 19 05:36:41 crc kubenswrapper[5012]: E0219 05:36:41.829719 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert podName:0aad4d6c-fc60-4843-b21b-d4ad6d552d5f nodeName:}" failed. No retries permitted until 2026-02-19 05:36:42.329707167 +0000 UTC m=+698.363029726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-zvl62" (UID: "0aad4d6c-fc60-4843-b21b-d4ad6d552d5f") : secret "plugin-serving-cert" not found Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.829609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.830040 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.839665 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c49886887-28b5c"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.843345 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.853495 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqccj\" (UniqueName: \"kubernetes.io/projected/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-kube-api-access-tqccj\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: W0219 05:36:41.870409 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b5e9e17_84bc_4d05_87f9_328826ea39df.slice/crio-c01bd16b9f9be33458b34ba5f390b2ab5324a437aad2a02893e31162444e0749 WatchSource:0}: Error finding container c01bd16b9f9be33458b34ba5f390b2ab5324a437aad2a02893e31162444e0749: Status 404 returned error can't find the container with id c01bd16b9f9be33458b34ba5f390b2ab5324a437aad2a02893e31162444e0749 Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930539 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-oauth-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930769 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-service-ca\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930835 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-oauth-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930852 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-trusted-ca-bundle\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930987 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.931542 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jmhb\" (UniqueName: \"kubernetes.io/projected/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-kube-api-access-8jmhb\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc 
kubenswrapper[5012]: I0219 05:36:42.038929 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-oauth-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.038959 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-trusted-ca-bundle\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.038977 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.039017 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmhb\" (UniqueName: \"kubernetes.io/projected/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-kube-api-access-8jmhb\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.039041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc 
kubenswrapper[5012]: I0219 05:36:42.039058 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-service-ca\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.039072 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-oauth-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.040714 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-oauth-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.040836 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-trusted-ca-bundle\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.041419 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-service-ca\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.041536 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.042186 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-oauth-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.043015 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.054839 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmhb\" (UniqueName: \"kubernetes.io/projected/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-kube-api-access-8jmhb\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.170897 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.242377 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-hn274"] Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.242931 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.249792 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.344103 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.351734 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.416639 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.462218 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c49886887-28b5c"] Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.467796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tdz8p" event={"ID":"4b5e9e17-84bc-4d05-87f9-328826ea39df","Type":"ContainerStarted","Data":"c01bd16b9f9be33458b34ba5f390b2ab5324a437aad2a02893e31162444e0749"} Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.469427 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" event={"ID":"91d45b3f-23b3-4342-8168-667f665ffe82","Type":"ContainerStarted","Data":"9abf528d64ef1cd29e11d2a0fcd35d1c6a7d8e2f88a349bd620e93919a5c704e"} Feb 19 05:36:42 crc kubenswrapper[5012]: W0219 05:36:42.484513 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7609fb75_2a23_43bd_9cbd_6cc14fd4e7d3.slice/crio-cdd2660d1171d3cedd977d709fd7da7e312b89542e7fc09428a7f2b4b0de097d WatchSource:0}: Error finding container cdd2660d1171d3cedd977d709fd7da7e312b89542e7fc09428a7f2b4b0de097d: Status 404 returned error can't find the container with id cdd2660d1171d3cedd977d709fd7da7e312b89542e7fc09428a7f2b4b0de097d Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.564766 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.644822 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh"] Feb 19 05:36:42 crc kubenswrapper[5012]: W0219 05:36:42.671773 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50749fb3_e43e_4874_a0ea_8dabae225f85.slice/crio-c42040a163a7d736ad6c3effbdf415fc53051c1d4bb189c4408a774f08a94ef1 WatchSource:0}: Error finding container c42040a163a7d736ad6c3effbdf415fc53051c1d4bb189c4408a774f08a94ef1: Status 404 returned error can't find the container with id c42040a163a7d736ad6c3effbdf415fc53051c1d4bb189c4408a774f08a94ef1 Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.776857 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62"] Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 05:36:43.476325 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c49886887-28b5c" event={"ID":"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3","Type":"ContainerStarted","Data":"f54e52e0ca8ec5ffb6b3c2bd79452dabb4e05ecc5ad24f67ae5f2adec41dd2c5"} Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 05:36:43.476550 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c49886887-28b5c" event={"ID":"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3","Type":"ContainerStarted","Data":"cdd2660d1171d3cedd977d709fd7da7e312b89542e7fc09428a7f2b4b0de097d"} Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 05:36:43.479464 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" event={"ID":"50749fb3-e43e-4874-a0ea-8dabae225f85","Type":"ContainerStarted","Data":"c42040a163a7d736ad6c3effbdf415fc53051c1d4bb189c4408a774f08a94ef1"} Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 
05:36:43.491293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" event={"ID":"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f","Type":"ContainerStarted","Data":"89b7912ba7d901985484936a1e4bee156a2e0839180714b2e500df56277ee32d"} Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 05:36:43.501527 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c49886887-28b5c" podStartSLOduration=2.5015138930000003 podStartE2EDuration="2.501513893s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:36:43.498141361 +0000 UTC m=+699.531463930" watchObservedRunningTime="2026-02-19 05:36:43.501513893 +0000 UTC m=+699.534836462" Feb 19 05:36:44 crc kubenswrapper[5012]: I0219 05:36:44.430079 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:36:44 crc kubenswrapper[5012]: I0219 05:36:44.430408 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.508988 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" event={"ID":"50749fb3-e43e-4874-a0ea-8dabae225f85","Type":"ContainerStarted","Data":"4de46efda103b843bc38d058ff6e262f4b74d298b4bf38dbe0e208847b977f1a"} Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 
05:36:45.509559 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.511131 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tdz8p" event={"ID":"4b5e9e17-84bc-4d05-87f9-328826ea39df","Type":"ContainerStarted","Data":"c16ced7211ef93693e6f299e4ce037de04bf60cf31df271b639ed4210561fc8b"} Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.512044 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.513599 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" event={"ID":"91d45b3f-23b3-4342-8168-667f665ffe82","Type":"ContainerStarted","Data":"5f4756ba306c4cb4ce549a37b37941c92f75f51695101de27b400b9213002744"} Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.556612 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" podStartSLOduration=2.641174171 podStartE2EDuration="4.556593038s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="2026-02-19 05:36:42.684619106 +0000 UTC m=+698.717941675" lastFinishedPulling="2026-02-19 05:36:44.600037933 +0000 UTC m=+700.633360542" observedRunningTime="2026-02-19 05:36:45.531635241 +0000 UTC m=+701.564957810" watchObservedRunningTime="2026-02-19 05:36:45.556593038 +0000 UTC m=+701.589915607" Feb 19 05:36:46 crc kubenswrapper[5012]: I0219 05:36:46.521810 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" event={"ID":"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f","Type":"ContainerStarted","Data":"6db0c9354d191485f9fff11c29fd1e9e6ec2db9b0e8c7415c2932b0540d693c1"} Feb 19 05:36:46 crc kubenswrapper[5012]: I0219 05:36:46.540356 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tdz8p" podStartSLOduration=2.825210211 podStartE2EDuration="5.540337674s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="2026-02-19 05:36:41.873583305 +0000 UTC m=+697.906905874" lastFinishedPulling="2026-02-19 05:36:44.588710728 +0000 UTC m=+700.622033337" observedRunningTime="2026-02-19 05:36:45.55747233 +0000 UTC m=+701.590794969" watchObservedRunningTime="2026-02-19 05:36:46.540337674 +0000 UTC m=+702.573660243" Feb 19 05:36:48 crc kubenswrapper[5012]: I0219 05:36:48.544959 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" event={"ID":"91d45b3f-23b3-4342-8168-667f665ffe82","Type":"ContainerStarted","Data":"f40e3f9611b39d0ed2dd3c5b67666a3f2b2b605b17b23f577f1f84ca3c59c1ff"} Feb 19 05:36:48 crc kubenswrapper[5012]: I0219 05:36:48.572983 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" podStartSLOduration=4.323630056 podStartE2EDuration="7.572954043s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="2026-02-19 05:36:42.806006391 +0000 UTC m=+698.839328970" lastFinishedPulling="2026-02-19 05:36:46.055330358 +0000 UTC m=+702.088652957" observedRunningTime="2026-02-19 05:36:46.544480765 +0000 UTC m=+702.577803334" watchObservedRunningTime="2026-02-19 05:36:48.572954043 +0000 UTC m=+704.606276642" Feb 19 05:36:48 crc kubenswrapper[5012]: I0219 05:36:48.574496 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" podStartSLOduration=2.4698195 podStartE2EDuration="7.57448375s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="2026-02-19 05:36:42.268390154 +0000 UTC m=+698.301712713" lastFinishedPulling="2026-02-19 05:36:47.373054394 +0000 UTC m=+703.406376963" 
observedRunningTime="2026-02-19 05:36:48.5724386 +0000 UTC m=+704.605761199" watchObservedRunningTime="2026-02-19 05:36:48.57448375 +0000 UTC m=+704.607806359" Feb 19 05:36:51 crc kubenswrapper[5012]: I0219 05:36:51.883610 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.171945 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.172499 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.180038 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.593907 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.674069 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:37:02 crc kubenswrapper[5012]: I0219 05:37:02.424549 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:37:14 crc kubenswrapper[5012]: I0219 05:37:14.437354 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:37:14 crc kubenswrapper[5012]: I0219 05:37:14.438048 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:37:17 crc kubenswrapper[5012]: I0219 05:37:17.750172 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-mlxbg" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" containerID="cri-o://cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" gracePeriod=15 Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.192401 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mlxbg_5ff8f20f-5302-4b7a-826c-5d557c65c0f3/console/0.log" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.192463 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.368668 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.368976 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369075 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" 
(UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369171 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369294 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369520 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.370071 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca" (OuterVolumeSpecName: "service-ca") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.370381 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.370602 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config" (OuterVolumeSpecName: "console-config") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.370952 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.376084 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.379107 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb" (OuterVolumeSpecName: "kube-api-access-dzxsb") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "kube-api-access-dzxsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.380479 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470698 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470747 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470756 5012 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470764 5012 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470772 5012 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470782 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470794 5012 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794732 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mlxbg_5ff8f20f-5302-4b7a-826c-5d557c65c0f3/console/0.log" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794792 5012 generic.go:334] "Generic (PLEG): container finished" podID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerID="cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" exitCode=2 Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794824 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mlxbg" event={"ID":"5ff8f20f-5302-4b7a-826c-5d557c65c0f3","Type":"ContainerDied","Data":"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737"} Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mlxbg" 
event={"ID":"5ff8f20f-5302-4b7a-826c-5d557c65c0f3","Type":"ContainerDied","Data":"a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362"} Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794869 5012 scope.go:117] "RemoveContainer" containerID="cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794918 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.814647 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.818879 5012 scope.go:117] "RemoveContainer" containerID="cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.819222 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:37:18 crc kubenswrapper[5012]: E0219 05:37:18.819369 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737\": container with ID starting with cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737 not found: ID does not exist" containerID="cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.819416 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737"} err="failed to get container status \"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737\": rpc error: code = NotFound desc = could not find container \"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737\": container with ID starting with 
cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737 not found: ID does not exist" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.277654 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf"] Feb 19 05:37:19 crc kubenswrapper[5012]: E0219 05:37:19.278016 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.278041 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.278228 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.279610 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.283123 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.290793 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf"] Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.384543 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc 
kubenswrapper[5012]: I0219 05:37:19.384606 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.384873 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.486690 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.486765 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.486867 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.487263 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.487811 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.525369 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.614616 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.879898 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf"] Feb 19 05:37:20 crc kubenswrapper[5012]: I0219 05:37:20.715264 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" path="/var/lib/kubelet/pods/5ff8f20f-5302-4b7a-826c-5d557c65c0f3/volumes" Feb 19 05:37:20 crc kubenswrapper[5012]: I0219 05:37:20.818183 5012 generic.go:334] "Generic (PLEG): container finished" podID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerID="aadfaec376e2f374a0e410d704abfdd0041449e55a2f0296a55c1ca8809f871a" exitCode=0 Feb 19 05:37:20 crc kubenswrapper[5012]: I0219 05:37:20.818369 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerDied","Data":"aadfaec376e2f374a0e410d704abfdd0041449e55a2f0296a55c1ca8809f871a"} Feb 19 05:37:20 crc kubenswrapper[5012]: I0219 05:37:20.818867 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerStarted","Data":"532effff0272a6ed4a1427a3800addb685987ce9a23630f3d1b4f93cbcb8aa92"} Feb 19 05:37:22 crc kubenswrapper[5012]: I0219 05:37:22.860795 5012 generic.go:334] "Generic (PLEG): container finished" podID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerID="f862f54b2a8ea948ed3610af72b7b7d0c81d24236c009cd6cacf5e25e5e6fa5e" exitCode=0 Feb 19 05:37:22 crc kubenswrapper[5012]: I0219 05:37:22.860923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerDied","Data":"f862f54b2a8ea948ed3610af72b7b7d0c81d24236c009cd6cacf5e25e5e6fa5e"} Feb 19 05:37:23 crc kubenswrapper[5012]: I0219 05:37:23.906673 5012 generic.go:334] "Generic (PLEG): container finished" podID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerID="cad0773f4b16d786fc1e16f199c86852cd659377043048ad5d5ae36732edd2af" exitCode=0 Feb 19 05:37:23 crc kubenswrapper[5012]: I0219 05:37:23.906734 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerDied","Data":"cad0773f4b16d786fc1e16f199c86852cd659377043048ad5d5ae36732edd2af"} Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.301166 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.472698 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") pod \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.472849 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") pod \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.472925 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") pod \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.474240 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle" (OuterVolumeSpecName: "bundle") pod "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" (UID: "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.485154 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s" (OuterVolumeSpecName: "kube-api-access-fwn4s") pod "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" (UID: "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778"). InnerVolumeSpecName "kube-api-access-fwn4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.572644 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util" (OuterVolumeSpecName: "util") pod "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" (UID: "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.575445 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.575505 5012 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.576191 5012 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.931743 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerDied","Data":"532effff0272a6ed4a1427a3800addb685987ce9a23630f3d1b4f93cbcb8aa92"} Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.931803 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="532effff0272a6ed4a1427a3800addb685987ce9a23630f3d1b4f93cbcb8aa92" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.931845 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.802905 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj"] Feb 19 05:37:36 crc kubenswrapper[5012]: E0219 05:37:36.803759 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="pull" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.803773 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="pull" Feb 19 05:37:36 crc kubenswrapper[5012]: E0219 05:37:36.803793 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="util" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.803801 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="util" Feb 19 05:37:36 crc kubenswrapper[5012]: E0219 05:37:36.803816 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="extract" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.803824 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="extract" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.803954 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="extract" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.804463 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.807757 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.807969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.808151 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gzbbd" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.809534 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.811727 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.819703 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj"] Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.931085 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-apiservice-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.931262 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssqx\" (UniqueName: \"kubernetes.io/projected/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-kube-api-access-hssqx\") pod 
\"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.931374 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-webhook-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.032107 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-webhook-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.032171 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-apiservice-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.032211 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssqx\" (UniqueName: \"kubernetes.io/projected/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-kube-api-access-hssqx\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc 
kubenswrapper[5012]: I0219 05:37:37.037989 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-webhook-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.038716 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-apiservice-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.060121 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssqx\" (UniqueName: \"kubernetes.io/projected/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-kube-api-access-hssqx\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.118431 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.142895 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74"] Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.143561 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.145928 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.145947 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.146641 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wwzlh" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.158992 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74"] Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.234790 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hx9\" (UniqueName: \"kubernetes.io/projected/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-kube-api-access-79hx9\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.235093 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-webhook-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.235126 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-apiservice-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.335884 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-apiservice-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.335956 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79hx9\" (UniqueName: \"kubernetes.io/projected/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-kube-api-access-79hx9\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.335999 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-webhook-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.339392 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-apiservice-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 
05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.339659 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-webhook-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.352057 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79hx9\" (UniqueName: \"kubernetes.io/projected/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-kube-api-access-79hx9\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.486353 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.660829 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj"] Feb 19 05:37:37 crc kubenswrapper[5012]: W0219 05:37:37.669476 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b78fff_bf4d_4cd6_aba9_b74303a5dd50.slice/crio-db33b2497cfe8e6b4ca0d70590f1535173dd90537b73e610e60b2787be9e73cc WatchSource:0}: Error finding container db33b2497cfe8e6b4ca0d70590f1535173dd90537b73e610e60b2787be9e73cc: Status 404 returned error can't find the container with id db33b2497cfe8e6b4ca0d70590f1535173dd90537b73e610e60b2787be9e73cc Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.761860 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74"] Feb 19 05:37:37 
crc kubenswrapper[5012]: W0219 05:37:37.772038 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec7fdada_6f6e_4d8b_b2e1_c944050c714c.slice/crio-d780fe048cc2aa7a52e416bfdb029ca75146724ef3048189b1a29993c260336d WatchSource:0}: Error finding container d780fe048cc2aa7a52e416bfdb029ca75146724ef3048189b1a29993c260336d: Status 404 returned error can't find the container with id d780fe048cc2aa7a52e416bfdb029ca75146724ef3048189b1a29993c260336d Feb 19 05:37:38 crc kubenswrapper[5012]: I0219 05:37:38.007449 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" event={"ID":"ec7fdada-6f6e-4d8b-b2e1-c944050c714c","Type":"ContainerStarted","Data":"d780fe048cc2aa7a52e416bfdb029ca75146724ef3048189b1a29993c260336d"} Feb 19 05:37:38 crc kubenswrapper[5012]: I0219 05:37:38.008563 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" event={"ID":"05b78fff-bf4d-4cd6-aba9-b74303a5dd50","Type":"ContainerStarted","Data":"db33b2497cfe8e6b4ca0d70590f1535173dd90537b73e610e60b2787be9e73cc"} Feb 19 05:37:40 crc kubenswrapper[5012]: I0219 05:37:40.338940 5012 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 05:37:41 crc kubenswrapper[5012]: I0219 05:37:41.077540 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" event={"ID":"05b78fff-bf4d-4cd6-aba9-b74303a5dd50","Type":"ContainerStarted","Data":"b44e1fe3a02a35d922aad5e0d8f95c3d5ff220e4b2b34a031561c4658bb70611"} Feb 19 05:37:41 crc kubenswrapper[5012]: I0219 05:37:41.077834 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:41 crc kubenswrapper[5012]: I0219 
05:37:41.102856 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" podStartSLOduration=1.968173197 podStartE2EDuration="5.102837532s" podCreationTimestamp="2026-02-19 05:37:36 +0000 UTC" firstStartedPulling="2026-02-19 05:37:37.671247149 +0000 UTC m=+753.704569728" lastFinishedPulling="2026-02-19 05:37:40.805911494 +0000 UTC m=+756.839234063" observedRunningTime="2026-02-19 05:37:41.09539801 +0000 UTC m=+757.128720629" watchObservedRunningTime="2026-02-19 05:37:41.102837532 +0000 UTC m=+757.136160101" Feb 19 05:37:43 crc kubenswrapper[5012]: I0219 05:37:43.093610 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" event={"ID":"ec7fdada-6f6e-4d8b-b2e1-c944050c714c","Type":"ContainerStarted","Data":"496af1f7a9d08f212da3074df25922935e599f28b8fe04441d505db64054be82"} Feb 19 05:37:43 crc kubenswrapper[5012]: I0219 05:37:43.094018 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:43 crc kubenswrapper[5012]: I0219 05:37:43.123986 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" podStartSLOduration=1.354981662 podStartE2EDuration="6.12396552s" podCreationTimestamp="2026-02-19 05:37:37 +0000 UTC" firstStartedPulling="2026-02-19 05:37:37.77521246 +0000 UTC m=+753.808535029" lastFinishedPulling="2026-02-19 05:37:42.544196318 +0000 UTC m=+758.577518887" observedRunningTime="2026-02-19 05:37:43.117209695 +0000 UTC m=+759.150532284" watchObservedRunningTime="2026-02-19 05:37:43.12396552 +0000 UTC m=+759.157288089" Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.431216 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.431746 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.431814 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.433063 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.433214 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089" gracePeriod=600 Feb 19 05:37:45 crc kubenswrapper[5012]: I0219 05:37:45.109976 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089" exitCode=0 Feb 19 05:37:45 crc kubenswrapper[5012]: I0219 05:37:45.110439 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089"} Feb 19 05:37:45 crc kubenswrapper[5012]: I0219 05:37:45.110507 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a"} Feb 19 05:37:45 crc kubenswrapper[5012]: I0219 05:37:45.110530 5012 scope.go:117] "RemoveContainer" containerID="8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd" Feb 19 05:37:50 crc kubenswrapper[5012]: I0219 05:37:50.926111 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:37:50 crc kubenswrapper[5012]: I0219 05:37:50.927528 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:50 crc kubenswrapper[5012]: I0219 05:37:50.945551 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.080724 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.081144 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.081286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.182813 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.182916 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.182954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.183768 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.183784 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.212562 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.289590 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.732381 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:37:52 crc kubenswrapper[5012]: I0219 05:37:52.181361 5012 generic.go:334] "Generic (PLEG): container finished" podID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerID="e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395" exitCode=0 Feb 19 05:37:52 crc kubenswrapper[5012]: I0219 05:37:52.181614 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerDied","Data":"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395"} Feb 19 05:37:52 crc kubenswrapper[5012]: I0219 05:37:52.181639 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerStarted","Data":"12d34e10928c2dfbd4f6f549b88f76696b6d8930975e76bce01f89d79536c334"} Feb 19 05:37:54 crc kubenswrapper[5012]: I0219 05:37:54.197727 5012 generic.go:334] "Generic (PLEG): container finished" podID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerID="1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9" exitCode=0 Feb 19 05:37:54 crc kubenswrapper[5012]: I0219 05:37:54.197780 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerDied","Data":"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9"} Feb 19 05:37:55 crc kubenswrapper[5012]: I0219 05:37:55.208616 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" 
event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerStarted","Data":"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c"} Feb 19 05:37:55 crc kubenswrapper[5012]: I0219 05:37:55.229491 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vd2gr" podStartSLOduration=2.550997324 podStartE2EDuration="5.229451053s" podCreationTimestamp="2026-02-19 05:37:50 +0000 UTC" firstStartedPulling="2026-02-19 05:37:52.183032117 +0000 UTC m=+768.216354686" lastFinishedPulling="2026-02-19 05:37:54.861485836 +0000 UTC m=+770.894808415" observedRunningTime="2026-02-19 05:37:55.22646012 +0000 UTC m=+771.259782729" watchObservedRunningTime="2026-02-19 05:37:55.229451053 +0000 UTC m=+771.262773632" Feb 19 05:37:57 crc kubenswrapper[5012]: I0219 05:37:57.491722 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:38:01 crc kubenswrapper[5012]: I0219 05:38:01.290125 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:01 crc kubenswrapper[5012]: I0219 05:38:01.290481 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:02 crc kubenswrapper[5012]: I0219 05:38:02.348892 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd2gr" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" probeResult="failure" output=< Feb 19 05:38:02 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:38:02 crc kubenswrapper[5012]: > Feb 19 05:38:11 crc kubenswrapper[5012]: I0219 05:38:11.362684 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:11 crc 
kubenswrapper[5012]: I0219 05:38:11.427726 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:11 crc kubenswrapper[5012]: I0219 05:38:11.611276 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:38:13 crc kubenswrapper[5012]: I0219 05:38:13.356411 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vd2gr" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" containerID="cri-o://05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" gracePeriod=2 Feb 19 05:38:13 crc kubenswrapper[5012]: I0219 05:38:13.880433 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.034517 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") pod \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.034578 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") pod \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.034633 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") pod \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " Feb 19 05:38:14 crc 
kubenswrapper[5012]: I0219 05:38:14.035771 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities" (OuterVolumeSpecName: "utilities") pod "36832a35-ae82-46eb-89dd-9e1a1a58fca1" (UID: "36832a35-ae82-46eb-89dd-9e1a1a58fca1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.043508 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp" (OuterVolumeSpecName: "kube-api-access-x5bxp") pod "36832a35-ae82-46eb-89dd-9e1a1a58fca1" (UID: "36832a35-ae82-46eb-89dd-9e1a1a58fca1"). InnerVolumeSpecName "kube-api-access-x5bxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.135988 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.136020 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.175556 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36832a35-ae82-46eb-89dd-9e1a1a58fca1" (UID: "36832a35-ae82-46eb-89dd-9e1a1a58fca1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.237345 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368709 5012 generic.go:334] "Generic (PLEG): container finished" podID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerID="05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" exitCode=0 Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerDied","Data":"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c"} Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368838 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368866 5012 scope.go:117] "RemoveContainer" containerID="05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368846 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerDied","Data":"12d34e10928c2dfbd4f6f549b88f76696b6d8930975e76bce01f89d79536c334"} Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.397661 5012 scope.go:117] "RemoveContainer" containerID="1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.422367 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.430671 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.441118 5012 scope.go:117] "RemoveContainer" containerID="e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.471375 5012 scope.go:117] "RemoveContainer" containerID="05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" Feb 19 05:38:14 crc kubenswrapper[5012]: E0219 05:38:14.471948 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c\": container with ID starting with 05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c not found: ID does not exist" containerID="05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.472001 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c"} err="failed to get container status \"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c\": rpc error: code = NotFound desc = could not find container \"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c\": container with ID starting with 05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c not found: ID does not exist" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.472049 5012 scope.go:117] "RemoveContainer" containerID="1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9" Feb 19 05:38:14 crc kubenswrapper[5012]: E0219 05:38:14.473158 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9\": container with ID starting with 1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9 not found: ID does not exist" containerID="1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.473257 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9"} err="failed to get container status \"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9\": rpc error: code = NotFound desc = could not find container \"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9\": container with ID starting with 1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9 not found: ID does not exist" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.473339 5012 scope.go:117] "RemoveContainer" containerID="e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395" Feb 19 05:38:14 crc kubenswrapper[5012]: E0219 
05:38:14.473978 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395\": container with ID starting with e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395 not found: ID does not exist" containerID="e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.474020 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395"} err="failed to get container status \"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395\": rpc error: code = NotFound desc = could not find container \"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395\": container with ID starting with e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395 not found: ID does not exist" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.716595 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" path="/var/lib/kubelet/pods/36832a35-ae82-46eb-89dd-9e1a1a58fca1/volumes" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.121946 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.890886 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4d76m"] Feb 19 05:38:17 crc kubenswrapper[5012]: E0219 05:38:17.891428 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="extract-content" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.891460 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" 
containerName="extract-content" Feb 19 05:38:17 crc kubenswrapper[5012]: E0219 05:38:17.891480 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.891494 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" Feb 19 05:38:17 crc kubenswrapper[5012]: E0219 05:38:17.891521 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="extract-utilities" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.891535 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="extract-utilities" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.891766 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.900018 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84"] Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.900739 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.901039 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.905036 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.905244 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.905543 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-clrpz" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.905757 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.924820 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84"] Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.982289 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-87ct4"] Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.983747 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-87ct4" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.987234 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pdjcn" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.987548 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.987748 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.988866 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.990528 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-c4jbq"] Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.991513 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.992604 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006201 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmqw\" (UniqueName: \"kubernetes.io/projected/48b2548c-eb36-4c42-a84f-2d3f2084a46f-kube-api-access-7vmqw\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006240 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-sockets\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006265 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-cert\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006287 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/82cb6684-3937-45f8-9f18-56940e88f480-metallb-excludel2\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006319 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4882d\" (UniqueName: 
\"kubernetes.io/projected/82cb6684-3937-45f8-9f18-56940e88f480-kube-api-access-4882d\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006335 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8lbv\" (UniqueName: \"kubernetes.io/projected/fe949ecf-1cb7-47c7-b196-d4851f142c5f-kube-api-access-v8lbv\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006356 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431a9bf4-479e-4255-9664-554c80fa4376-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006376 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlfvs\" (UniqueName: \"kubernetes.io/projected/431a9bf4-479e-4255-9664-554c80fa4376-kube-api-access-jlfvs\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006397 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006413 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-conf\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006434 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006465 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-startup\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006482 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-reloader\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006515 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-metrics-certs\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006531 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006549 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.008417 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-c4jbq"] Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107538 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-metrics-certs\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107592 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107613 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107640 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmqw\" (UniqueName: 
\"kubernetes.io/projected/48b2548c-eb36-4c42-a84f-2d3f2084a46f-kube-api-access-7vmqw\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107680 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-sockets\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107711 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-cert\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107737 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4882d\" (UniqueName: \"kubernetes.io/projected/82cb6684-3937-45f8-9f18-56940e88f480-kube-api-access-4882d\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107755 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/82cb6684-3937-45f8-9f18-56940e88f480-metallb-excludel2\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107780 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8lbv\" (UniqueName: \"kubernetes.io/projected/fe949ecf-1cb7-47c7-b196-d4851f142c5f-kube-api-access-v8lbv\") pod \"controller-69bbfbf88f-c4jbq\" (UID: 
\"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107812 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431a9bf4-479e-4255-9664-554c80fa4376-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.107811 5012 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107837 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlfvs\" (UniqueName: \"kubernetes.io/projected/431a9bf4-479e-4255-9664-554c80fa4376-kube-api-access-jlfvs\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107868 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.107905 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist podName:82cb6684-3937-45f8-9f18-56940e88f480 nodeName:}" failed. No retries permitted until 2026-02-19 05:38:18.607878983 +0000 UTC m=+794.641201552 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist") pod "speaker-87ct4" (UID: "82cb6684-3937-45f8-9f18-56940e88f480") : secret "metallb-memberlist" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107942 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-conf\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108026 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108049 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-startup\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108068 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-reloader\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108277 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics\") pod \"frr-k8s-4d76m\" (UID: 
\"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.108485 5012 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.108606 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs podName:48b2548c-eb36-4c42-a84f-2d3f2084a46f nodeName:}" failed. No retries permitted until 2026-02-19 05:38:18.60857688 +0000 UTC m=+794.641899489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs") pod "frr-k8s-4d76m" (UID: "48b2548c-eb36-4c42-a84f-2d3f2084a46f") : secret "frr-k8s-certs-secret" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108683 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-sockets\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.108686 5012 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.108753 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs podName:fe949ecf-1cb7-47c7-b196-d4851f142c5f nodeName:}" failed. No retries permitted until 2026-02-19 05:38:18.608742184 +0000 UTC m=+794.642064753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs") pod "controller-69bbfbf88f-c4jbq" (UID: "fe949ecf-1cb7-47c7-b196-d4851f142c5f") : secret "controller-certs-secret" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.109010 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-conf\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.109048 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-reloader\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.109112 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/82cb6684-3937-45f8-9f18-56940e88f480-metallb-excludel2\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.109461 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-startup\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.112754 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.114698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-metrics-certs\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.116787 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431a9bf4-479e-4255-9664-554c80fa4376-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.121612 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-cert\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.125985 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmqw\" (UniqueName: \"kubernetes.io/projected/48b2548c-eb36-4c42-a84f-2d3f2084a46f-kube-api-access-7vmqw\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.129015 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8lbv\" (UniqueName: \"kubernetes.io/projected/fe949ecf-1cb7-47c7-b196-d4851f142c5f-kube-api-access-v8lbv\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.131348 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4882d\" (UniqueName: \"kubernetes.io/projected/82cb6684-3937-45f8-9f18-56940e88f480-kube-api-access-4882d\") pod \"speaker-87ct4\" 
(UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.132638 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlfvs\" (UniqueName: \"kubernetes.io/projected/431a9bf4-479e-4255-9664-554c80fa4376-kube-api-access-jlfvs\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.232640 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.453041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84"] Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.612577 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.613032 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.613052 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 
05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.613887 5012 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.614072 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist podName:82cb6684-3937-45f8-9f18-56940e88f480 nodeName:}" failed. No retries permitted until 2026-02-19 05:38:19.614031364 +0000 UTC m=+795.647353963 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist") pod "speaker-87ct4" (UID: "82cb6684-3937-45f8-9f18-56940e88f480") : secret "metallb-memberlist" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.621358 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.621587 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.823571 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.913892 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:19 crc kubenswrapper[5012]: I0219 05:38:19.440224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"d9f3573276cb5e4a080d5a4701db3de93e1051e5801d64937ca8e0c702fc27bb"} Feb 19 05:38:19 crc kubenswrapper[5012]: I0219 05:38:19.441166 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" event={"ID":"431a9bf4-479e-4255-9664-554c80fa4376","Type":"ContainerStarted","Data":"660ea8a59313a5f500662062a7161875c8a8cdb9f34620a12910f8f57a04caa8"} Feb 19 05:38:19 crc kubenswrapper[5012]: I0219 05:38:19.484790 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-c4jbq"] Feb 19 05:38:19 crc kubenswrapper[5012]: I0219 05:38:19.629042 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:19 crc kubenswrapper[5012]: E0219 05:38:19.629392 5012 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 05:38:19 crc kubenswrapper[5012]: E0219 05:38:19.629512 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist podName:82cb6684-3937-45f8-9f18-56940e88f480 nodeName:}" failed. No retries permitted until 2026-02-19 05:38:21.629484591 +0000 UTC m=+797.662807200 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist") pod "speaker-87ct4" (UID: "82cb6684-3937-45f8-9f18-56940e88f480") : secret "metallb-memberlist" not found Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.460854 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-c4jbq" event={"ID":"fe949ecf-1cb7-47c7-b196-d4851f142c5f","Type":"ContainerStarted","Data":"0d3df4829290d2c587ab8aa88f9b2bceb6740e2693ceec4a59b5bf62f38e40b7"} Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.460897 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-c4jbq" event={"ID":"fe949ecf-1cb7-47c7-b196-d4851f142c5f","Type":"ContainerStarted","Data":"07ef5bbedac13bf3c83e38b89ace08f3941c8e5d8ed16da0452b817d5d954270"} Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.460909 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-c4jbq" event={"ID":"fe949ecf-1cb7-47c7-b196-d4851f142c5f","Type":"ContainerStarted","Data":"c29bf2bcb45cf3c033aec0797b2424b729c02eeb92df71b09842dfb40810b852"} Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.461843 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.491287 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-c4jbq" podStartSLOduration=3.491267369 podStartE2EDuration="3.491267369s" podCreationTimestamp="2026-02-19 05:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:38:20.48063401 +0000 UTC m=+796.513956579" watchObservedRunningTime="2026-02-19 05:38:20.491267369 +0000 UTC m=+796.524589938" Feb 19 05:38:21 crc kubenswrapper[5012]: 
I0219 05:38:21.656009 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:21 crc kubenswrapper[5012]: I0219 05:38:21.671291 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:21 crc kubenswrapper[5012]: I0219 05:38:21.903380 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-87ct4" Feb 19 05:38:21 crc kubenswrapper[5012]: W0219 05:38:21.944965 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82cb6684_3937_45f8_9f18_56940e88f480.slice/crio-62d5f1ffe62d16ec514817c37035e1cdf01e0e0c063a4174ba1e9ec6cde99c86 WatchSource:0}: Error finding container 62d5f1ffe62d16ec514817c37035e1cdf01e0e0c063a4174ba1e9ec6cde99c86: Status 404 returned error can't find the container with id 62d5f1ffe62d16ec514817c37035e1cdf01e0e0c063a4174ba1e9ec6cde99c86 Feb 19 05:38:22 crc kubenswrapper[5012]: I0219 05:38:22.474813 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-87ct4" event={"ID":"82cb6684-3937-45f8-9f18-56940e88f480","Type":"ContainerStarted","Data":"ef6385321a50dbc57d893ed85934d5e9ef181b7d5f0ccdf578715dca403f4b05"} Feb 19 05:38:22 crc kubenswrapper[5012]: I0219 05:38:22.475181 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-87ct4" event={"ID":"82cb6684-3937-45f8-9f18-56940e88f480","Type":"ContainerStarted","Data":"301c02fa8c0c52af6793aba7cbeb93211116a660f7266c7671cf8aa6806945a9"} Feb 19 05:38:22 crc 
kubenswrapper[5012]: I0219 05:38:22.475195 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-87ct4" event={"ID":"82cb6684-3937-45f8-9f18-56940e88f480","Type":"ContainerStarted","Data":"62d5f1ffe62d16ec514817c37035e1cdf01e0e0c063a4174ba1e9ec6cde99c86"} Feb 19 05:38:22 crc kubenswrapper[5012]: I0219 05:38:22.475613 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-87ct4" Feb 19 05:38:22 crc kubenswrapper[5012]: I0219 05:38:22.494190 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-87ct4" podStartSLOduration=5.494168013 podStartE2EDuration="5.494168013s" podCreationTimestamp="2026-02-19 05:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:38:22.492776819 +0000 UTC m=+798.526099388" watchObservedRunningTime="2026-02-19 05:38:22.494168013 +0000 UTC m=+798.527490582" Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.512793 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" event={"ID":"431a9bf4-479e-4255-9664-554c80fa4376","Type":"ContainerStarted","Data":"52d5adf8a2b549a8d58613d7fa52bc091548b69823b737bae1e84a5ab8dc0e37"} Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.513707 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.519356 5012 generic.go:334] "Generic (PLEG): container finished" podID="48b2548c-eb36-4c42-a84f-2d3f2084a46f" containerID="e82f60ebe1a7c9228a0dd9dfa0ba5e61c52b9b60d5402b45431873868ef774f5" exitCode=0 Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.519417 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" 
event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerDied","Data":"e82f60ebe1a7c9228a0dd9dfa0ba5e61c52b9b60d5402b45431873868ef774f5"} Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.533829 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" podStartSLOduration=1.748000115 podStartE2EDuration="9.533797687s" podCreationTimestamp="2026-02-19 05:38:17 +0000 UTC" firstStartedPulling="2026-02-19 05:38:18.471114995 +0000 UTC m=+794.504437564" lastFinishedPulling="2026-02-19 05:38:26.256912537 +0000 UTC m=+802.290235136" observedRunningTime="2026-02-19 05:38:26.53271129 +0000 UTC m=+802.566033889" watchObservedRunningTime="2026-02-19 05:38:26.533797687 +0000 UTC m=+802.567120296" Feb 19 05:38:27 crc kubenswrapper[5012]: I0219 05:38:27.531239 5012 generic.go:334] "Generic (PLEG): container finished" podID="48b2548c-eb36-4c42-a84f-2d3f2084a46f" containerID="5b2a2771f976c94f1be824c3868c214d5ed383407f23c4fbcea458e4fa09c2f0" exitCode=0 Feb 19 05:38:27 crc kubenswrapper[5012]: I0219 05:38:27.531365 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerDied","Data":"5b2a2771f976c94f1be824c3868c214d5ed383407f23c4fbcea458e4fa09c2f0"} Feb 19 05:38:28 crc kubenswrapper[5012]: I0219 05:38:28.542867 5012 generic.go:334] "Generic (PLEG): container finished" podID="48b2548c-eb36-4c42-a84f-2d3f2084a46f" containerID="128cd9457413a787da8d23cd5c8a89e8704790be28bef11dd04f186d32cfb420" exitCode=0 Feb 19 05:38:28 crc kubenswrapper[5012]: I0219 05:38:28.543878 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerDied","Data":"128cd9457413a787da8d23cd5c8a89e8704790be28bef11dd04f186d32cfb420"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558650 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"74c0ed099b78d31089c47af49ea78f92b53b60561adfc44aa374f9cdb0f876c0"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558927 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"9a9d867988893a25143807f3026e83a5aa4c9fbaeba526284c69f33633a86e39"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558938 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"e066bfe7f144cb2fbbe7968e4e1dcd6d95af98f24db9002e64087029223e6f83"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"7a9632ee62eb68c701e61b3a8978f8319de995e896a7aa477545f61b3b34d753"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558956 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"10f3e58db8955afed9058e0af3b2a44a2ebd305a90eb000d3871104f0420fd86"} Feb 19 05:38:30 crc kubenswrapper[5012]: I0219 05:38:30.572521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"4afe7c497f913cd1fc74bdc2f49214c5b4fc3750cee7ceee45fae8c88e617c79"} Feb 19 05:38:30 crc kubenswrapper[5012]: I0219 05:38:30.572954 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:30 crc kubenswrapper[5012]: I0219 05:38:30.613272 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4d76m" podStartSLOduration=6.393057364 podStartE2EDuration="13.613222908s" podCreationTimestamp="2026-02-19 05:38:17 +0000 UTC" firstStartedPulling="2026-02-19 05:38:19.008401843 +0000 UTC m=+795.041724452" lastFinishedPulling="2026-02-19 05:38:26.228567387 +0000 UTC m=+802.261889996" observedRunningTime="2026-02-19 05:38:30.603713897 +0000 UTC m=+806.637036506" watchObservedRunningTime="2026-02-19 05:38:30.613222908 +0000 UTC m=+806.646545517" Feb 19 05:38:33 crc kubenswrapper[5012]: I0219 05:38:33.824920 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:33 crc kubenswrapper[5012]: I0219 05:38:33.889289 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:38 crc kubenswrapper[5012]: I0219 05:38:38.238799 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:38 crc kubenswrapper[5012]: I0219 05:38:38.830714 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:38 crc kubenswrapper[5012]: I0219 05:38:38.927929 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.813676 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.817200 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.837097 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.969696 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.969791 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.969827 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.070908 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.071114 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.071685 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.071768 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.072238 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.106508 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.176254 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.474953 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.667396 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerStarted","Data":"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"} Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.667801 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerStarted","Data":"f7a540a730396232505345a52eeca82daa986d46ed9d4f09815e20a2f47f7abf"} Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.908941 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-87ct4" Feb 19 05:38:42 crc kubenswrapper[5012]: I0219 05:38:42.679947 5012 generic.go:334] "Generic (PLEG): container finished" podID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerID="9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990" exitCode=0 Feb 19 05:38:42 crc kubenswrapper[5012]: I0219 05:38:42.680042 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerDied","Data":"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"} Feb 19 05:38:43 crc kubenswrapper[5012]: I0219 05:38:43.691511 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerStarted","Data":"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"} Feb 19 05:38:44 crc 
kubenswrapper[5012]: I0219 05:38:44.704285 5012 generic.go:334] "Generic (PLEG): container finished" podID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerID="37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1" exitCode=0 Feb 19 05:38:44 crc kubenswrapper[5012]: I0219 05:38:44.726090 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerDied","Data":"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"} Feb 19 05:38:45 crc kubenswrapper[5012]: I0219 05:38:45.713997 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerStarted","Data":"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"} Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.382399 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rtnz8" podStartSLOduration=5.972075227 podStartE2EDuration="8.382373659s" podCreationTimestamp="2026-02-19 05:38:40 +0000 UTC" firstStartedPulling="2026-02-19 05:38:42.683062015 +0000 UTC m=+818.716384624" lastFinishedPulling="2026-02-19 05:38:45.093360477 +0000 UTC m=+821.126683056" observedRunningTime="2026-02-19 05:38:46.419051457 +0000 UTC m=+822.452374026" watchObservedRunningTime="2026-02-19 05:38:48.382373659 +0000 UTC m=+824.415696258" Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.386383 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cl447"] Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.387577 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.390847 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.391158 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.400054 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rgf79" Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.401832 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cl447"] Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.549255 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vng4w\" (UniqueName: \"kubernetes.io/projected/797c14cf-1b4d-4b4e-9dc5-4843e2e77cef-kube-api-access-vng4w\") pod \"openstack-operator-index-cl447\" (UID: \"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef\") " pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.650408 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vng4w\" (UniqueName: \"kubernetes.io/projected/797c14cf-1b4d-4b4e-9dc5-4843e2e77cef-kube-api-access-vng4w\") pod \"openstack-operator-index-cl447\" (UID: \"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef\") " pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.692779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vng4w\" (UniqueName: \"kubernetes.io/projected/797c14cf-1b4d-4b4e-9dc5-4843e2e77cef-kube-api-access-vng4w\") pod \"openstack-operator-index-cl447\" (UID: 
\"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef\") " pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.729703 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:38:49 crc kubenswrapper[5012]: I0219 05:38:49.019833 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cl447"] Feb 19 05:38:49 crc kubenswrapper[5012]: W0219 05:38:49.024161 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod797c14cf_1b4d_4b4e_9dc5_4843e2e77cef.slice/crio-ea4fb0159267403236388ac9641dfe7f09edddb7388bdf7ba5591f86c59338f0 WatchSource:0}: Error finding container ea4fb0159267403236388ac9641dfe7f09edddb7388bdf7ba5591f86c59338f0: Status 404 returned error can't find the container with id ea4fb0159267403236388ac9641dfe7f09edddb7388bdf7ba5591f86c59338f0 Feb 19 05:38:49 crc kubenswrapper[5012]: I0219 05:38:49.748804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cl447" event={"ID":"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef","Type":"ContainerStarted","Data":"ea4fb0159267403236388ac9641dfe7f09edddb7388bdf7ba5591f86c59338f0"} Feb 19 05:38:50 crc kubenswrapper[5012]: I0219 05:38:50.760396 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cl447" event={"ID":"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef","Type":"ContainerStarted","Data":"25719e595a9d519d2b875b4a43941e7e665e4dc860031e497d4b63dad331962c"} Feb 19 05:38:50 crc kubenswrapper[5012]: I0219 05:38:50.798691 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cl447" podStartSLOduration=1.865319637 podStartE2EDuration="2.798661988s" podCreationTimestamp="2026-02-19 05:38:48 +0000 UTC" 
firstStartedPulling="2026-02-19 05:38:49.027927513 +0000 UTC m=+825.061250092" lastFinishedPulling="2026-02-19 05:38:49.961269834 +0000 UTC m=+825.994592443" observedRunningTime="2026-02-19 05:38:50.788826489 +0000 UTC m=+826.822149088" watchObservedRunningTime="2026-02-19 05:38:50.798661988 +0000 UTC m=+826.831984587" Feb 19 05:38:51 crc kubenswrapper[5012]: I0219 05:38:51.177030 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:51 crc kubenswrapper[5012]: I0219 05:38:51.177438 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:51 crc kubenswrapper[5012]: I0219 05:38:51.239166 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:51 crc kubenswrapper[5012]: I0219 05:38:51.841863 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:53 crc kubenswrapper[5012]: I0219 05:38:53.368883 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:54 crc kubenswrapper[5012]: I0219 05:38:54.792648 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rtnz8" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="registry-server" containerID="cri-o://8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee" gracePeriod=2 Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.262351 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.360953 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") pod \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.361050 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") pod \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.361216 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") pod \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.362836 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities" (OuterVolumeSpecName: "utilities") pod "2ad2fcc6-eb34-4443-b76a-08bb5891507f" (UID: "2ad2fcc6-eb34-4443-b76a-08bb5891507f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.371921 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758" (OuterVolumeSpecName: "kube-api-access-27758") pod "2ad2fcc6-eb34-4443-b76a-08bb5891507f" (UID: "2ad2fcc6-eb34-4443-b76a-08bb5891507f"). InnerVolumeSpecName "kube-api-access-27758". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.397203 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ad2fcc6-eb34-4443-b76a-08bb5891507f" (UID: "2ad2fcc6-eb34-4443-b76a-08bb5891507f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.463449 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.463499 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.463524 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.804937 5012 generic.go:334] "Generic (PLEG): container finished" podID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerID="8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee" exitCode=0 Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.804983 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerDied","Data":"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"} Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.805038 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerDied","Data":"f7a540a730396232505345a52eeca82daa986d46ed9d4f09815e20a2f47f7abf"} Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.805051 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.805062 5012 scope.go:117] "RemoveContainer" containerID="8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.830341 5012 scope.go:117] "RemoveContainer" containerID="37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.860611 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.868112 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.869472 5012 scope.go:117] "RemoveContainer" containerID="9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.894944 5012 scope.go:117] "RemoveContainer" containerID="8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee" Feb 19 05:38:55 crc kubenswrapper[5012]: E0219 05:38:55.895567 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee\": container with ID starting with 8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee not found: ID does not exist" containerID="8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.895621 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"} err="failed to get container status \"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee\": rpc error: code = NotFound desc = could not find container \"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee\": container with ID starting with 8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee not found: ID does not exist" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.895657 5012 scope.go:117] "RemoveContainer" containerID="37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1" Feb 19 05:38:55 crc kubenswrapper[5012]: E0219 05:38:55.896102 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1\": container with ID starting with 37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1 not found: ID does not exist" containerID="37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.896202 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"} err="failed to get container status \"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1\": rpc error: code = NotFound desc = could not find container \"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1\": container with ID starting with 37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1 not found: ID does not exist" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.896240 5012 scope.go:117] "RemoveContainer" containerID="9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990" Feb 19 05:38:55 crc kubenswrapper[5012]: E0219 
05:38:55.896782 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990\": container with ID starting with 9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990 not found: ID does not exist" containerID="9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990" Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.896826 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"} err="failed to get container status \"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990\": rpc error: code = NotFound desc = could not find container \"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990\": container with ID starting with 9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990 not found: ID does not exist" Feb 19 05:38:56 crc kubenswrapper[5012]: I0219 05:38:56.715355 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" path="/var/lib/kubelet/pods/2ad2fcc6-eb34-4443-b76a-08bb5891507f/volumes" Feb 19 05:38:58 crc kubenswrapper[5012]: I0219 05:38:58.731747 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:38:58 crc kubenswrapper[5012]: I0219 05:38:58.732190 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:38:58 crc kubenswrapper[5012]: I0219 05:38:58.774683 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:38:58 crc kubenswrapper[5012]: I0219 05:38:58.878965 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-cl447" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.631988 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"] Feb 19 05:39:01 crc kubenswrapper[5012]: E0219 05:39:01.632396 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="extract-content" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.632443 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="extract-content" Feb 19 05:39:01 crc kubenswrapper[5012]: E0219 05:39:01.632459 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="extract-utilities" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.632472 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="extract-utilities" Feb 19 05:39:01 crc kubenswrapper[5012]: E0219 05:39:01.632504 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="registry-server" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.632517 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="registry-server" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.632731 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="registry-server" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.634234 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.639196 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gxsjj" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.651005 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"] Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.777152 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.777268 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.777463 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 
05:39:01.878642 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.878781 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.878883 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.879714 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.879778 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.918601 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.970367 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:02 crc kubenswrapper[5012]: I0219 05:39:02.469282 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"] Feb 19 05:39:02 crc kubenswrapper[5012]: W0219 05:39:02.479428 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59bb7d65_7d8f_487c_b586_7cd4be8eab12.slice/crio-40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e WatchSource:0}: Error finding container 40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e: Status 404 returned error can't find the container with id 40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e Feb 19 05:39:02 crc kubenswrapper[5012]: I0219 05:39:02.875557 5012 generic.go:334] "Generic (PLEG): container finished" podID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerID="754d7611c5f9ed36a19fe10c3aa0b56c3acd6d75c5d1c539a226d17d0986358d" exitCode=0 Feb 19 
05:39:02 crc kubenswrapper[5012]: I0219 05:39:02.875617 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerDied","Data":"754d7611c5f9ed36a19fe10c3aa0b56c3acd6d75c5d1c539a226d17d0986358d"} Feb 19 05:39:02 crc kubenswrapper[5012]: I0219 05:39:02.877027 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerStarted","Data":"40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e"} Feb 19 05:39:03 crc kubenswrapper[5012]: I0219 05:39:03.888402 5012 generic.go:334] "Generic (PLEG): container finished" podID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerID="7abefeba9ce892cb36dc582096096a2870b6d5345619dcb874b129d34ff33c4f" exitCode=0 Feb 19 05:39:03 crc kubenswrapper[5012]: I0219 05:39:03.888464 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerDied","Data":"7abefeba9ce892cb36dc582096096a2870b6d5345619dcb874b129d34ff33c4f"} Feb 19 05:39:04 crc kubenswrapper[5012]: I0219 05:39:04.901258 5012 generic.go:334] "Generic (PLEG): container finished" podID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerID="c1232041e5886d1fb567c5bd5a603a4dc061059e78ff14b1132df2e546ac4bdc" exitCode=0 Feb 19 05:39:04 crc kubenswrapper[5012]: I0219 05:39:04.901347 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerDied","Data":"c1232041e5886d1fb567c5bd5a603a4dc061059e78ff14b1132df2e546ac4bdc"} Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.275720 
5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.453910 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") pod \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.454001 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") pod \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.454088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") pod \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.455182 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle" (OuterVolumeSpecName: "bundle") pod "59bb7d65-7d8f-487c-b586-7cd4be8eab12" (UID: "59bb7d65-7d8f-487c-b586-7cd4be8eab12"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.463395 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns" (OuterVolumeSpecName: "kube-api-access-7qtns") pod "59bb7d65-7d8f-487c-b586-7cd4be8eab12" (UID: "59bb7d65-7d8f-487c-b586-7cd4be8eab12"). 
InnerVolumeSpecName "kube-api-access-7qtns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.483165 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util" (OuterVolumeSpecName: "util") pod "59bb7d65-7d8f-487c-b586-7cd4be8eab12" (UID: "59bb7d65-7d8f-487c-b586-7cd4be8eab12"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.556599 5012 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.556646 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.556668 5012 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.922798 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerDied","Data":"40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e"} Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.922872 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e" Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.923222 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.921277 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"] Feb 19 05:39:11 crc kubenswrapper[5012]: E0219 05:39:11.922052 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="util" Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922067 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="util" Feb 19 05:39:11 crc kubenswrapper[5012]: E0219 05:39:11.922087 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="extract" Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922095 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="extract" Feb 19 05:39:11 crc kubenswrapper[5012]: E0219 05:39:11.922112 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="pull" Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922121 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="pull" Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922257 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="extract" Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922788 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.924842 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-w5pqc" Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.950025 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"] Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.958946 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jdp\" (UniqueName: \"kubernetes.io/projected/76b34ac4-96f1-4bbc-9969-eb3e1cfc2159-kube-api-access-98jdp\") pod \"openstack-operator-controller-init-6679bf9b57-q57bk\" (UID: \"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.060385 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jdp\" (UniqueName: \"kubernetes.io/projected/76b34ac4-96f1-4bbc-9969-eb3e1cfc2159-kube-api-access-98jdp\") pod \"openstack-operator-controller-init-6679bf9b57-q57bk\" (UID: \"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.095852 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jdp\" (UniqueName: \"kubernetes.io/projected/76b34ac4-96f1-4bbc-9969-eb3e1cfc2159-kube-api-access-98jdp\") pod \"openstack-operator-controller-init-6679bf9b57-q57bk\" (UID: \"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.244059 5012 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.591288 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"] Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.971952 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" event={"ID":"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159","Type":"ContainerStarted","Data":"be0cf90e7840d58f063834a172e481be88cde3d92cb7d50cf620c7fd753dc6bb"} Feb 19 05:39:18 crc kubenswrapper[5012]: I0219 05:39:18.014582 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" event={"ID":"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159","Type":"ContainerStarted","Data":"15c525f23e864e23a6f6f84b762d46a4f648932d213342dbd7d85697814c187f"} Feb 19 05:39:18 crc kubenswrapper[5012]: I0219 05:39:18.015100 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" Feb 19 05:39:18 crc kubenswrapper[5012]: I0219 05:39:18.064412 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" podStartSLOduration=2.6056852619999997 podStartE2EDuration="7.064385836s" podCreationTimestamp="2026-02-19 05:39:11 +0000 UTC" firstStartedPulling="2026-02-19 05:39:12.609935823 +0000 UTC m=+848.643258412" lastFinishedPulling="2026-02-19 05:39:17.068636417 +0000 UTC m=+853.101958986" observedRunningTime="2026-02-19 05:39:18.059127518 +0000 UTC m=+854.092450117" watchObservedRunningTime="2026-02-19 05:39:18.064385836 +0000 UTC m=+854.097708445" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.199549 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.202221 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.212395 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.303072 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.303452 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.303665 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.404704 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") 
" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.404828 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.404871 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.406257 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.406344 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.449325 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " 
pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.537163 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.753263 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:21 crc kubenswrapper[5012]: I0219 05:39:21.048585 5012 generic.go:334] "Generic (PLEG): container finished" podID="88253e52-7e63-4042-8eee-d414c388e9c8" containerID="ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a" exitCode=0 Feb 19 05:39:21 crc kubenswrapper[5012]: I0219 05:39:21.048979 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerDied","Data":"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a"} Feb 19 05:39:21 crc kubenswrapper[5012]: I0219 05:39:21.049019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerStarted","Data":"e81e24ff31c9e0fe526a55507b5fb0fce47a5fce655d3f1a64e54d56ef44547f"} Feb 19 05:39:22 crc kubenswrapper[5012]: I0219 05:39:22.247739 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" Feb 19 05:39:23 crc kubenswrapper[5012]: I0219 05:39:23.072586 5012 generic.go:334] "Generic (PLEG): container finished" podID="88253e52-7e63-4042-8eee-d414c388e9c8" containerID="584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f" exitCode=0 Feb 19 05:39:23 crc kubenswrapper[5012]: I0219 05:39:23.072633 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" 
event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerDied","Data":"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f"} Feb 19 05:39:24 crc kubenswrapper[5012]: I0219 05:39:24.100865 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerStarted","Data":"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3"} Feb 19 05:39:24 crc kubenswrapper[5012]: I0219 05:39:24.135166 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5vmsf" podStartSLOduration=1.7679633479999999 podStartE2EDuration="4.13514407s" podCreationTimestamp="2026-02-19 05:39:20 +0000 UTC" firstStartedPulling="2026-02-19 05:39:21.050714839 +0000 UTC m=+857.084037448" lastFinishedPulling="2026-02-19 05:39:23.417895611 +0000 UTC m=+859.451218170" observedRunningTime="2026-02-19 05:39:24.13020171 +0000 UTC m=+860.163524319" watchObservedRunningTime="2026-02-19 05:39:24.13514407 +0000 UTC m=+860.168466649" Feb 19 05:39:30 crc kubenswrapper[5012]: I0219 05:39:30.537930 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:30 crc kubenswrapper[5012]: I0219 05:39:30.538341 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:30 crc kubenswrapper[5012]: I0219 05:39:30.589553 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:31 crc kubenswrapper[5012]: I0219 05:39:31.208281 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:31 crc kubenswrapper[5012]: I0219 05:39:31.280469 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.169590 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5vmsf" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="registry-server" containerID="cri-o://c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3" gracePeriod=2 Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.646914 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.696088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") pod \"88253e52-7e63-4042-8eee-d414c388e9c8\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.696173 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") pod \"88253e52-7e63-4042-8eee-d414c388e9c8\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.696197 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") pod \"88253e52-7e63-4042-8eee-d414c388e9c8\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.697098 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities" (OuterVolumeSpecName: "utilities") pod "88253e52-7e63-4042-8eee-d414c388e9c8" (UID: 
"88253e52-7e63-4042-8eee-d414c388e9c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.700763 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24" (OuterVolumeSpecName: "kube-api-access-lbr24") pod "88253e52-7e63-4042-8eee-d414c388e9c8" (UID: "88253e52-7e63-4042-8eee-d414c388e9c8"). InnerVolumeSpecName "kube-api-access-lbr24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.759920 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88253e52-7e63-4042-8eee-d414c388e9c8" (UID: "88253e52-7e63-4042-8eee-d414c388e9c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.798032 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.798063 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.798074 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177103 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="88253e52-7e63-4042-8eee-d414c388e9c8" containerID="c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3" exitCode=0 Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177145 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerDied","Data":"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3"} Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177191 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerDied","Data":"e81e24ff31c9e0fe526a55507b5fb0fce47a5fce655d3f1a64e54d56ef44547f"} Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177188 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177212 5012 scope.go:117] "RemoveContainer" containerID="c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.195617 5012 scope.go:117] "RemoveContainer" containerID="584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.225760 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.229127 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.229411 5012 scope.go:117] "RemoveContainer" containerID="ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.264127 5012 scope.go:117] "RemoveContainer" 
containerID="c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3"
Feb 19 05:39:34 crc kubenswrapper[5012]: E0219 05:39:34.264572 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3\": container with ID starting with c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3 not found: ID does not exist" containerID="c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3"
Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.264611 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3"} err="failed to get container status \"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3\": rpc error: code = NotFound desc = could not find container \"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3\": container with ID starting with c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3 not found: ID does not exist"
Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.264635 5012 scope.go:117] "RemoveContainer" containerID="584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f"
Feb 19 05:39:34 crc kubenswrapper[5012]: E0219 05:39:34.268573 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f\": container with ID starting with 584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f not found: ID does not exist" containerID="584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f"
Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.268606 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f"} err="failed to get container status \"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f\": rpc error: code = NotFound desc = could not find container \"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f\": container with ID starting with 584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f not found: ID does not exist"
Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.268628 5012 scope.go:117] "RemoveContainer" containerID="ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a"
Feb 19 05:39:34 crc kubenswrapper[5012]: E0219 05:39:34.272599 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a\": container with ID starting with ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a not found: ID does not exist" containerID="ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a"
Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.272625 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a"} err="failed to get container status \"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a\": rpc error: code = NotFound desc = could not find container \"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a\": container with ID starting with ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a not found: ID does not exist"
Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.714833 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" path="/var/lib/kubelet/pods/88253e52-7e63-4042-8eee-d414c388e9c8/volumes"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.366417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"]
Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.367060 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="registry-server"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367072 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="registry-server"
Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.367088 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="extract-utilities"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367093 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="extract-utilities"
Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.367103 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="extract-content"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367111 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="extract-content"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367219 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="registry-server"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367602 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.370011 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5fbns"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.373980 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.383968 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.384842 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.386634 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fsczk"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.401823 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.424371 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.425194 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.428204 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.428653 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-c2dnh"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.429046 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.433374 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.433493 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nbvp5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.450558 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.454559 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.455289 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.458585 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.459180 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.459753 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7dgvb"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.461729 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-llgkh"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.461919 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.469619 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.470389 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.471539 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jwdvc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.481609 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.499170 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.510099 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.522990 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.523752 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.529463 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.531617 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dcgvb"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.538109 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.538908 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.544392 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.545325 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.545825 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2b6nl"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.546638 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjpx\" (UniqueName: \"kubernetes.io/projected/0cc1b41b-fbf6-4d0c-b721-dcad09c03feb-kube-api-access-vpjpx\") pod \"barbican-operator-controller-manager-868647ff47-xzk2n\" (UID: \"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.546680 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrp6c\" (UniqueName: \"kubernetes.io/projected/11d49fcd-6e31-47e5-84a1-c6ae972e13cb-kube-api-access-jrp6c\") pod \"designate-operator-controller-manager-6d8bf5c495-kt4nw\" (UID: \"11d49fcd-6e31-47e5-84a1-c6ae972e13cb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.546713 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k272\" (UniqueName: \"kubernetes.io/projected/8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be-kube-api-access-7k272\") pod \"glance-operator-controller-manager-77987464f4-qzq7x\" (UID: \"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.546755 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2sz\" (UniqueName: \"kubernetes.io/projected/8af03a54-ad7a-4684-b5a6-ba83f410e6ed-kube-api-access-kf2sz\") pod \"cinder-operator-controller-manager-5d946d989d-556xv\" (UID: \"8af03a54-ad7a-4684-b5a6-ba83f410e6ed\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.548497 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lth8m"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.557372 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.563041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.578366 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.579283 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.585337 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.586267 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.588699 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2v7sl"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.617226 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.627952 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zqw88"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649054 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjpx\" (UniqueName: \"kubernetes.io/projected/0cc1b41b-fbf6-4d0c-b721-dcad09c03feb-kube-api-access-vpjpx\") pod \"barbican-operator-controller-manager-868647ff47-xzk2n\" (UID: \"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649134 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrcl\" (UniqueName: \"kubernetes.io/projected/4f281b5b-b656-4d4a-b628-d4bfe4fc94f9-kube-api-access-wwrcl\") pod \"horizon-operator-controller-manager-5b9b8895d5-csct6\" (UID: \"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649172 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rs7\" (UniqueName: \"kubernetes.io/projected/dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43-kube-api-access-r7rs7\") pod \"keystone-operator-controller-manager-b4d948c87-9zkvx\" (UID: \"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649209 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrp6c\" (UniqueName: \"kubernetes.io/projected/11d49fcd-6e31-47e5-84a1-c6ae972e13cb-kube-api-access-jrp6c\") pod \"designate-operator-controller-manager-6d8bf5c495-kt4nw\" (UID: \"11d49fcd-6e31-47e5-84a1-c6ae972e13cb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649243 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8t8k\" (UniqueName: \"kubernetes.io/projected/e9e07b56-2724-4046-8a60-81b751fb0588-kube-api-access-z8t8k\") pod \"manila-operator-controller-manager-54f6768c69-ldrx5\" (UID: \"e9e07b56-2724-4046-8a60-81b751fb0588\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649273 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jhp\" (UniqueName: \"kubernetes.io/projected/996bfd61-486b-432d-9e09-d3a90ff9124c-kube-api-access-h5jhp\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649316 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcl5\" (UniqueName: \"kubernetes.io/projected/8629b5e4-e6a8-4c47-b76b-f58a26b42912-kube-api-access-6jcl5\") pod \"ironic-operator-controller-manager-554564d7fc-dgldv\" (UID: \"8629b5e4-e6a8-4c47-b76b-f58a26b42912\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649347 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k272\" (UniqueName: \"kubernetes.io/projected/8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be-kube-api-access-7k272\") pod \"glance-operator-controller-manager-77987464f4-qzq7x\" (UID: \"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649371 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjkq\" (UniqueName: \"kubernetes.io/projected/bfca307c-9b00-4c12-bdd6-a394b7cc7cfd-kube-api-access-7gjkq\") pod \"heat-operator-controller-manager-69f49c598c-5szxp\" (UID: \"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649420 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2sz\" (UniqueName: \"kubernetes.io/projected/8af03a54-ad7a-4684-b5a6-ba83f410e6ed-kube-api-access-kf2sz\") pod \"cinder-operator-controller-manager-5d946d989d-556xv\" (UID: \"8af03a54-ad7a-4684-b5a6-ba83f410e6ed\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649455 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.670373 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.693337 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjpx\" (UniqueName: \"kubernetes.io/projected/0cc1b41b-fbf6-4d0c-b721-dcad09c03feb-kube-api-access-vpjpx\") pod \"barbican-operator-controller-manager-868647ff47-xzk2n\" (UID: \"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.693398 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.694370 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.694433 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2sz\" (UniqueName: \"kubernetes.io/projected/8af03a54-ad7a-4684-b5a6-ba83f410e6ed-kube-api-access-kf2sz\") pod \"cinder-operator-controller-manager-5d946d989d-556xv\" (UID: \"8af03a54-ad7a-4684-b5a6-ba83f410e6ed\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.696542 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kmhh8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.700203 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrp6c\" (UniqueName: \"kubernetes.io/projected/11d49fcd-6e31-47e5-84a1-c6ae972e13cb-kube-api-access-jrp6c\") pod \"designate-operator-controller-manager-6d8bf5c495-kt4nw\" (UID: \"11d49fcd-6e31-47e5-84a1-c6ae972e13cb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.705804 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.709062 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.709961 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.711245 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k272\" (UniqueName: \"kubernetes.io/projected/8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be-kube-api-access-7k272\") pod \"glance-operator-controller-manager-77987464f4-qzq7x\" (UID: \"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.714584 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-txkm9"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.736102 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.746851 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.747372 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.748933 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.749715 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751196 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp425\" (UniqueName: \"kubernetes.io/projected/1e872b11-03d6-4d3f-8e06-e10e1e73d917-kube-api-access-lp425\") pod \"mariadb-operator-controller-manager-6994f66f48-rpbt8\" (UID: \"1e872b11-03d6-4d3f-8e06-e10e1e73d917\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751248 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751281 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrcl\" (UniqueName: \"kubernetes.io/projected/4f281b5b-b656-4d4a-b628-d4bfe4fc94f9-kube-api-access-wwrcl\") pod \"horizon-operator-controller-manager-5b9b8895d5-csct6\" (UID: \"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751323 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7rs7\" (UniqueName: \"kubernetes.io/projected/dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43-kube-api-access-r7rs7\") pod \"keystone-operator-controller-manager-b4d948c87-9zkvx\" (UID: \"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751360 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8t8k\" (UniqueName: \"kubernetes.io/projected/e9e07b56-2724-4046-8a60-81b751fb0588-kube-api-access-z8t8k\") pod \"manila-operator-controller-manager-54f6768c69-ldrx5\" (UID: \"e9e07b56-2724-4046-8a60-81b751fb0588\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751381 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jhp\" (UniqueName: \"kubernetes.io/projected/996bfd61-486b-432d-9e09-d3a90ff9124c-kube-api-access-h5jhp\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751399 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcl5\" (UniqueName: \"kubernetes.io/projected/8629b5e4-e6a8-4c47-b76b-f58a26b42912-kube-api-access-6jcl5\") pod \"ironic-operator-controller-manager-554564d7fc-dgldv\" (UID: \"8629b5e4-e6a8-4c47-b76b-f58a26b42912\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751418 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrjml\" (UniqueName: \"kubernetes.io/projected/b123191d-e55b-4ddc-90ea-abcb34c97be2-kube-api-access-vrjml\") pod \"neutron-operator-controller-manager-64ddbf8bb-27hfc\" (UID: \"b123191d-e55b-4ddc-90ea-abcb34c97be2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751441 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gjkq\" (UniqueName: \"kubernetes.io/projected/bfca307c-9b00-4c12-bdd6-a394b7cc7cfd-kube-api-access-7gjkq\") pod \"heat-operator-controller-manager-69f49c598c-5szxp\" (UID: \"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.751716 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.751757 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:39:44.25174417 +0000 UTC m=+880.285066739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.752450 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zmpvr"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.754140 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.755144 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.756615 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.759363 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dfvzm"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.759855 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.777571 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7rs7\" (UniqueName: \"kubernetes.io/projected/dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43-kube-api-access-r7rs7\") pod \"keystone-operator-controller-manager-b4d948c87-9zkvx\" (UID: \"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.778865 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gjkq\" (UniqueName: \"kubernetes.io/projected/bfca307c-9b00-4c12-bdd6-a394b7cc7cfd-kube-api-access-7gjkq\") pod \"heat-operator-controller-manager-69f49c598c-5szxp\" (UID: \"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.779659 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrcl\" (UniqueName: \"kubernetes.io/projected/4f281b5b-b656-4d4a-b628-d4bfe4fc94f9-kube-api-access-wwrcl\") pod \"horizon-operator-controller-manager-5b9b8895d5-csct6\" (UID: \"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.779778 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcl5\" (UniqueName: \"kubernetes.io/projected/8629b5e4-e6a8-4c47-b76b-f58a26b42912-kube-api-access-6jcl5\") pod \"ironic-operator-controller-manager-554564d7fc-dgldv\" (UID: \"8629b5e4-e6a8-4c47-b76b-f58a26b42912\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.785387 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8t8k\" (UniqueName: \"kubernetes.io/projected/e9e07b56-2724-4046-8a60-81b751fb0588-kube-api-access-z8t8k\") pod \"manila-operator-controller-manager-54f6768c69-ldrx5\" (UID: \"e9e07b56-2724-4046-8a60-81b751fb0588\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.792854 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jhp\" (UniqueName: \"kubernetes.io/projected/996bfd61-486b-432d-9e09-d3a90ff9124c-kube-api-access-h5jhp\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.799424 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.804558 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.815431 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.829977 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.830965 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.835963 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kttx7"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.850984 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852113 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrjml\" (UniqueName: \"kubernetes.io/projected/b123191d-e55b-4ddc-90ea-abcb34c97be2-kube-api-access-vrjml\") pod \"neutron-operator-controller-manager-64ddbf8bb-27hfc\" (UID: \"b123191d-e55b-4ddc-90ea-abcb34c97be2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp425\" (UniqueName: \"kubernetes.io/projected/1e872b11-03d6-4d3f-8e06-e10e1e73d917-kube-api-access-lp425\") pod \"mariadb-operator-controller-manager-6994f66f48-rpbt8\" (UID: \"1e872b11-03d6-4d3f-8e06-e10e1e73d917\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852235 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg8vb\" (UniqueName: \"kubernetes.io/projected/10e6fa53-581b-4965-8a38-c70a5c61c6d7-kube-api-access-kg8vb\") pod \"ovn-operator-controller-manager-d44cf6b75-25qtj\" (UID: \"10e6fa53-581b-4965-8a38-c70a5c61c6d7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852294 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hv98\" (UniqueName: \"kubernetes.io/projected/ef60eda4-7ead-499b-b70f-07a34574096f-kube-api-access-7hv98\") pod \"octavia-operator-controller-manager-69f8888797-pqrs7\" (UID: \"ef60eda4-7ead-499b-b70f-07a34574096f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852412 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7vn\" (UniqueName: \"kubernetes.io/projected/457202a7-ae9f-4d06-8690-d220e532b305-kube-api-access-xf7vn\") pod \"nova-operator-controller-manager-567668f5cf-l65c5\" (UID: \"457202a7-ae9f-4d06-8690-d220e532b305\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.859790 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.863268 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.875712 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.881272 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.882430 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.890207 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.892904 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-25nnl" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.894938 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrjml\" (UniqueName: \"kubernetes.io/projected/b123191d-e55b-4ddc-90ea-abcb34c97be2-kube-api-access-vrjml\") pod \"neutron-operator-controller-manager-64ddbf8bb-27hfc\" (UID: \"b123191d-e55b-4ddc-90ea-abcb34c97be2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.895779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp425\" (UniqueName: \"kubernetes.io/projected/1e872b11-03d6-4d3f-8e06-e10e1e73d917-kube-api-access-lp425\") pod \"mariadb-operator-controller-manager-6994f66f48-rpbt8\" (UID: \"1e872b11-03d6-4d3f-8e06-e10e1e73d917\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.925354 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.926323 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.926456 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.930358 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.932356 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dzkxr" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.935696 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955583 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpcsc\" (UniqueName: \"kubernetes.io/projected/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-kube-api-access-kpcsc\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955635 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rklzj\" (UniqueName: \"kubernetes.io/projected/c55ed223-371b-409a-bcb6-8ca6d2a3c908-kube-api-access-rklzj\") pod \"swift-operator-controller-manager-68f46476f-6hfg4\" (UID: \"c55ed223-371b-409a-bcb6-8ca6d2a3c908\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955662 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg8vb\" (UniqueName: \"kubernetes.io/projected/10e6fa53-581b-4965-8a38-c70a5c61c6d7-kube-api-access-kg8vb\") pod 
\"ovn-operator-controller-manager-d44cf6b75-25qtj\" (UID: \"10e6fa53-581b-4965-8a38-c70a5c61c6d7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955701 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtvn\" (UniqueName: \"kubernetes.io/projected/08a4f79c-e42e-4609-b104-01b9a05ac95a-kube-api-access-fwtvn\") pod \"placement-operator-controller-manager-8497b45c89-nlqtw\" (UID: \"08a4f79c-e42e-4609-b104-01b9a05ac95a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955729 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955753 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hv98\" (UniqueName: \"kubernetes.io/projected/ef60eda4-7ead-499b-b70f-07a34574096f-kube-api-access-7hv98\") pod \"octavia-operator-controller-manager-69f8888797-pqrs7\" (UID: \"ef60eda4-7ead-499b-b70f-07a34574096f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955779 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6h7\" (UniqueName: \"kubernetes.io/projected/49d66f3b-e451-4b73-bc6a-4b854a71a4d6-kube-api-access-bz6h7\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qjpw6\" (UID: \"49d66f3b-e451-4b73-bc6a-4b854a71a4d6\") " 
pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955810 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7vn\" (UniqueName: \"kubernetes.io/projected/457202a7-ae9f-4d06-8690-d220e532b305-kube-api-access-xf7vn\") pod \"nova-operator-controller-manager-567668f5cf-l65c5\" (UID: \"457202a7-ae9f-4d06-8690-d220e532b305\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.973572 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hv98\" (UniqueName: \"kubernetes.io/projected/ef60eda4-7ead-499b-b70f-07a34574096f-kube-api-access-7hv98\") pod \"octavia-operator-controller-manager-69f8888797-pqrs7\" (UID: \"ef60eda4-7ead-499b-b70f-07a34574096f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.974013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg8vb\" (UniqueName: \"kubernetes.io/projected/10e6fa53-581b-4965-8a38-c70a5c61c6d7-kube-api-access-kg8vb\") pod \"ovn-operator-controller-manager-d44cf6b75-25qtj\" (UID: \"10e6fa53-581b-4965-8a38-c70a5c61c6d7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.976279 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pcpk8"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.977173 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.985382 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pcpk8"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.990746 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zsgtv" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.994507 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.994793 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7vn\" (UniqueName: \"kubernetes.io/projected/457202a7-ae9f-4d06-8690-d220e532b305-kube-api-access-xf7vn\") pod \"nova-operator-controller-manager-567668f5cf-l65c5\" (UID: \"457202a7-ae9f-4d06-8690-d220e532b305\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.012779 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.013694 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.021149 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vm67g" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.025240 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057481 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpcsc\" (UniqueName: \"kubernetes.io/projected/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-kube-api-access-kpcsc\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rklzj\" (UniqueName: \"kubernetes.io/projected/c55ed223-371b-409a-bcb6-8ca6d2a3c908-kube-api-access-rklzj\") pod \"swift-operator-controller-manager-68f46476f-6hfg4\" (UID: \"c55ed223-371b-409a-bcb6-8ca6d2a3c908\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057585 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwtvn\" (UniqueName: \"kubernetes.io/projected/08a4f79c-e42e-4609-b104-01b9a05ac95a-kube-api-access-fwtvn\") pod \"placement-operator-controller-manager-8497b45c89-nlqtw\" (UID: \"08a4f79c-e42e-4609-b104-01b9a05ac95a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057609 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6h7\" (UniqueName: \"kubernetes.io/projected/49d66f3b-e451-4b73-bc6a-4b854a71a4d6-kube-api-access-bz6h7\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qjpw6\" (UID: \"49d66f3b-e451-4b73-bc6a-4b854a71a4d6\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.060055 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.060143 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:44.560124947 +0000 UTC m=+880.593447516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.076664 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.077485 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpcsc\" (UniqueName: \"kubernetes.io/projected/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-kube-api-access-kpcsc\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.081413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rklzj\" (UniqueName: \"kubernetes.io/projected/c55ed223-371b-409a-bcb6-8ca6d2a3c908-kube-api-access-rklzj\") pod \"swift-operator-controller-manager-68f46476f-6hfg4\" (UID: \"c55ed223-371b-409a-bcb6-8ca6d2a3c908\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.082397 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz6h7\" (UniqueName: \"kubernetes.io/projected/49d66f3b-e451-4b73-bc6a-4b854a71a4d6-kube-api-access-bz6h7\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qjpw6\" (UID: \"49d66f3b-e451-4b73-bc6a-4b854a71a4d6\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.099965 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwtvn\" (UniqueName: \"kubernetes.io/projected/08a4f79c-e42e-4609-b104-01b9a05ac95a-kube-api-access-fwtvn\") pod \"placement-operator-controller-manager-8497b45c89-nlqtw\" (UID: \"08a4f79c-e42e-4609-b104-01b9a05ac95a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 
05:39:44.113678 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.155137 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.155676 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.157816 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.165960 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmp2\" (UniqueName: \"kubernetes.io/projected/739941d0-4bff-4dae-8f01-636386a37dd0-kube-api-access-mrmp2\") pod \"watcher-operator-controller-manager-5db88f68c-z5r47\" (UID: \"739941d0-4bff-4dae-8f01-636386a37dd0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.166098 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjlk4\" (UniqueName: \"kubernetes.io/projected/d1f124a8-4132-458d-a5a5-1839d31e7772-kube-api-access-gjlk4\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.166251 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97jw\" (UniqueName: 
\"kubernetes.io/projected/73e25e30-860d-4faf-b1f3-bc284f7189d1-kube-api-access-f97jw\") pod \"test-operator-controller-manager-7866795846-pcpk8\" (UID: \"73e25e30-860d-4faf-b1f3-bc284f7189d1\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.167330 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.172402 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.196744 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.197030 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.194636 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.178086 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.173453 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.174992 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d8sxf" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.209547 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.257586 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.300440 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.301395 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.301729 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.301906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.301961 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.302027 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjlk4\" (UniqueName: \"kubernetes.io/projected/d1f124a8-4132-458d-a5a5-1839d31e7772-kube-api-access-gjlk4\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.302050 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrmp2\" (UniqueName: \"kubernetes.io/projected/739941d0-4bff-4dae-8f01-636386a37dd0-kube-api-access-mrmp2\") pod \"watcher-operator-controller-manager-5db88f68c-z5r47\" (UID: 
\"739941d0-4bff-4dae-8f01-636386a37dd0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.302077 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.302100 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97jw\" (UniqueName: \"kubernetes.io/projected/73e25e30-860d-4faf-b1f3-bc284f7189d1-kube-api-access-f97jw\") pod \"test-operator-controller-manager-7866795846-pcpk8\" (UID: \"73e25e30-860d-4faf-b1f3-bc284f7189d1\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303091 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303137 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:44.803120042 +0000 UTC m=+880.836442611 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303477 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303524 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:44.803506501 +0000 UTC m=+880.836829070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303566 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303585 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:39:45.303578563 +0000 UTC m=+881.336901132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.306703 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tj57r" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.315773 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.322665 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrmp2\" (UniqueName: \"kubernetes.io/projected/739941d0-4bff-4dae-8f01-636386a37dd0-kube-api-access-mrmp2\") pod \"watcher-operator-controller-manager-5db88f68c-z5r47\" (UID: \"739941d0-4bff-4dae-8f01-636386a37dd0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.331931 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjlk4\" (UniqueName: \"kubernetes.io/projected/d1f124a8-4132-458d-a5a5-1839d31e7772-kube-api-access-gjlk4\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.336120 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97jw\" (UniqueName: \"kubernetes.io/projected/73e25e30-860d-4faf-b1f3-bc284f7189d1-kube-api-access-f97jw\") pod \"test-operator-controller-manager-7866795846-pcpk8\" (UID: \"73e25e30-860d-4faf-b1f3-bc284f7189d1\") " 
pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.352995 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.363905 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.403250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmv74\" (UniqueName: \"kubernetes.io/projected/4a3cde05-282a-4c65-9570-74d04c71a034-kube-api-access-nmv74\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mqc2w\" (UID: \"4a3cde05-282a-4c65-9570-74d04c71a034\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.404506 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.435106 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.435153 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.503445 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.504454 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv74\" (UniqueName: \"kubernetes.io/projected/4a3cde05-282a-4c65-9570-74d04c71a034-kube-api-access-nmv74\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mqc2w\" (UID: \"4a3cde05-282a-4c65-9570-74d04c71a034\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.606966 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.607124 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.607171 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:45.607157193 +0000 UTC m=+881.640479762 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.611592 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv74\" (UniqueName: \"kubernetes.io/projected/4a3cde05-282a-4c65-9570-74d04c71a034-kube-api-access-nmv74\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mqc2w\" (UID: \"4a3cde05-282a-4c65-9570-74d04c71a034\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.615843 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.616250 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.673394 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: W0219 05:39:44.688122 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b3edb91_d9bc_4f6f_9cf5_5d40f05bf3be.slice/crio-04b965a1d44ec7156a02b6ebefe193ac111779d07a2f8efd2f7b6560532a1261 WatchSource:0}: Error finding container 04b965a1d44ec7156a02b6ebefe193ac111779d07a2f8efd2f7b6560532a1261: Status 404 returned error can't find the container with id 04b965a1d44ec7156a02b6ebefe193ac111779d07a2f8efd2f7b6560532a1261 Feb 19 05:39:44 crc kubenswrapper[5012]: W0219 05:39:44.691956 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11d49fcd_6e31_47e5_84a1_c6ae972e13cb.slice/crio-4550c81aa85b109fc7362492bebfcc80ab165396c0523f653f1a8fcca7cd1287 WatchSource:0}: Error finding container 4550c81aa85b109fc7362492bebfcc80ab165396c0523f653f1a8fcca7cd1287: Status 404 returned error can't find the container with id 4550c81aa85b109fc7362492bebfcc80ab165396c0523f653f1a8fcca7cd1287 Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.810194 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.810558 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " 
pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.812825 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.812875 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:45.812860549 +0000 UTC m=+881.846183118 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.813501 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.813549 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:45.813533675 +0000 UTC m=+881.846856244 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.002221 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.011751 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9e07b56_2724_4046_8a60_81b751fb0588.slice/crio-6bdfb2b40753443c7b1d250190983f4ab67ab73262aa89222bc8a40efd763009 WatchSource:0}: Error finding container 6bdfb2b40753443c7b1d250190983f4ab67ab73262aa89222bc8a40efd763009: Status 404 returned error can't find the container with id 6bdfb2b40753443c7b1d250190983f4ab67ab73262aa89222bc8a40efd763009 Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.048887 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.105524 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.111999 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8629b5e4_e6a8_4c47_b76b_f58a26b42912.slice/crio-dfe9e98df34d2a579dbd4d4cb78781090d22da89b2c47977e6a49d95a4098d34 WatchSource:0}: Error finding container dfe9e98df34d2a579dbd4d4cb78781090d22da89b2c47977e6a49d95a4098d34: Status 404 returned error can't find the container with id dfe9e98df34d2a579dbd4d4cb78781090d22da89b2c47977e6a49d95a4098d34 Feb 19 05:39:45 crc 
kubenswrapper[5012]: I0219 05:39:45.130977 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.140008 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.141890 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cc1b41b_fbf6_4d0c_b721_dcad09c03feb.slice/crio-76125640d3b91baa619b96b41ac081d093fa1637b1f1f96011116e378516421c WatchSource:0}: Error finding container 76125640d3b91baa619b96b41ac081d093fa1637b1f1f96011116e378516421c: Status 404 returned error can't find the container with id 76125640d3b91baa619b96b41ac081d093fa1637b1f1f96011116e378516421c Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.145468 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.315648 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.315832 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.315886 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" 
failed. No retries permitted until 2026-02-19 05:39:47.315870724 +0000 UTC m=+883.349193293 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.322068 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" event={"ID":"e9e07b56-2724-4046-8a60-81b751fb0588","Type":"ContainerStarted","Data":"6bdfb2b40753443c7b1d250190983f4ab67ab73262aa89222bc8a40efd763009"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.328477 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.334065 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" event={"ID":"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb","Type":"ContainerStarted","Data":"76125640d3b91baa619b96b41ac081d093fa1637b1f1f96011116e378516421c"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.335034 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.340643 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.344519 5012 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e6fa53_581b_4965_8a38_c70a5c61c6d7.slice/crio-28860c4863093c32d12a4a6a58e8376a1127942d6a6b57864c8f33e0d6731121 WatchSource:0}: Error finding container 28860c4863093c32d12a4a6a58e8376a1127942d6a6b57864c8f33e0d6731121: Status 404 returned error can't find the container with id 28860c4863093c32d12a4a6a58e8376a1127942d6a6b57864c8f33e0d6731121 Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.350952 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.350986 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" event={"ID":"8629b5e4-e6a8-4c47-b76b-f58a26b42912","Type":"ContainerStarted","Data":"dfe9e98df34d2a579dbd4d4cb78781090d22da89b2c47977e6a49d95a4098d34"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.352398 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" event={"ID":"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43","Type":"ContainerStarted","Data":"1bef771e352cf5e8c82b4ae4872bc4d5992083e4f205de0a4ac903c26530988e"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.352468 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.353379 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" event={"ID":"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd","Type":"ContainerStarted","Data":"544ef579d1e51bbd16a64d2df2d8493aed6d7ff93c4852d86e3d6fb0786b05a6"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.357831 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" event={"ID":"11d49fcd-6e31-47e5-84a1-c6ae972e13cb","Type":"ContainerStarted","Data":"4550c81aa85b109fc7362492bebfcc80ab165396c0523f653f1a8fcca7cd1287"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.358710 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" event={"ID":"b123191d-e55b-4ddc-90ea-abcb34c97be2","Type":"ContainerStarted","Data":"809aed11609ee8d1d19aef2d9e34018c8adb897dd55aa5e6975b2071e559e959"} Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.358824 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a4f79c_e42e_4609_b104_01b9a05ac95a.slice/crio-af0cf467b90a785a8db5fe5e9139dfd99f6f63c686785298c2075d602de149d3 WatchSource:0}: Error finding container af0cf467b90a785a8db5fe5e9139dfd99f6f63c686785298c2075d602de149d3: Status 404 returned error can't find the container with id af0cf467b90a785a8db5fe5e9139dfd99f6f63c686785298c2075d602de149d3 Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.359495 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.361499 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" event={"ID":"1e872b11-03d6-4d3f-8e06-e10e1e73d917","Type":"ContainerStarted","Data":"0042f4c98fe7aa4b93329f86065d478f825bc27a14c77266fa74b1c3feae03f7"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.364421 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" 
event={"ID":"8af03a54-ad7a-4684-b5a6-ba83f410e6ed","Type":"ContainerStarted","Data":"c517c1f32113b9b24e196a0813209ed6df8ce8b867be34d0f98f119c6be187a0"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.367609 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" event={"ID":"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be","Type":"ContainerStarted","Data":"04b965a1d44ec7156a02b6ebefe193ac111779d07a2f8efd2f7b6560532a1261"} Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.380589 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wwrcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-csct6_openstack-operators(4f281b5b-b656-4d4a-b628-d4bfe4fc94f9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.381056 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef60eda4_7ead_499b_b70f_07a34574096f.slice/crio-5cc777d9a45a187848c8e7aad90fa31e325037e623c8bfbec46563c614780937 WatchSource:0}: Error finding container 5cc777d9a45a187848c8e7aad90fa31e325037e623c8bfbec46563c614780937: Status 404 returned error can't find the container with id 5cc777d9a45a187848c8e7aad90fa31e325037e623c8bfbec46563c614780937 Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.382148 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" podUID="4f281b5b-b656-4d4a-b628-d4bfe4fc94f9" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.393937 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-pqrs7_openstack-operators(ef60eda4-7ead-499b-b70f-07a34574096f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.395685 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" podUID="ef60eda4-7ead-499b-b70f-07a34574096f" Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.460766 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.469138 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.476939 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w"] Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.477429 5012 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrmp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-z5r47_openstack-operators(739941d0-4bff-4dae-8f01-636386a37dd0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.478920 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" podUID="739941d0-4bff-4dae-8f01-636386a37dd0" Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.482843 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pcpk8"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.484253 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a3cde05_282a_4c65_9570_74d04c71a034.slice/crio-3e2225296f9d9a7e0dc5481e11a191da5afcbeca118dc2ffa47f3c89fff56545 WatchSource:0}: Error finding container 3e2225296f9d9a7e0dc5481e11a191da5afcbeca118dc2ffa47f3c89fff56545: Status 404 returned error can't find the container with id 
3e2225296f9d9a7e0dc5481e11a191da5afcbeca118dc2ffa47f3c89fff56545 Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.487371 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmv74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mqc2w_openstack-operators(4a3cde05-282a-4c65-9570-74d04c71a034): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.488541 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" podUID="4a3cde05-282a-4c65-9570-74d04c71a034" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.491952 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f97jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-pcpk8_openstack-operators(73e25e30-860d-4faf-b1f3-bc284f7189d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.494678 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" podUID="73e25e30-860d-4faf-b1f3-bc284f7189d1" Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.620148 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.620330 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 
05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.620393 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:47.620377416 +0000 UTC m=+883.653699985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.833547 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.834451 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.834706 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.834777 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 
nodeName:}" failed. No retries permitted until 2026-02-19 05:39:47.834762875 +0000 UTC m=+883.868085434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.836133 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.836167 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:47.836159149 +0000 UTC m=+883.869481718 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.381272 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" event={"ID":"457202a7-ae9f-4d06-8690-d220e532b305","Type":"ContainerStarted","Data":"6177ccacda2cb011d9b0dbbb542a849d475f022c0131ea89a1858e858cd5077c"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.383789 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" event={"ID":"10e6fa53-581b-4965-8a38-c70a5c61c6d7","Type":"ContainerStarted","Data":"28860c4863093c32d12a4a6a58e8376a1127942d6a6b57864c8f33e0d6731121"} Feb 19 05:39:46 crc 
kubenswrapper[5012]: I0219 05:39:46.386192 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" event={"ID":"4a3cde05-282a-4c65-9570-74d04c71a034","Type":"ContainerStarted","Data":"3e2225296f9d9a7e0dc5481e11a191da5afcbeca118dc2ffa47f3c89fff56545"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.392135 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" event={"ID":"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9","Type":"ContainerStarted","Data":"9e68082543b88eb8692c2971ce037e9fbe73463f2950d28f7d766fdc47355f5d"} Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.394125 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" podUID="4f281b5b-b656-4d4a-b628-d4bfe4fc94f9" Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.395478 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" event={"ID":"08a4f79c-e42e-4609-b104-01b9a05ac95a","Type":"ContainerStarted","Data":"af0cf467b90a785a8db5fe5e9139dfd99f6f63c686785298c2075d602de149d3"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.396962 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" event={"ID":"49d66f3b-e451-4b73-bc6a-4b854a71a4d6","Type":"ContainerStarted","Data":"5c8ea10b6114011fa0d4d80e27fa65b7a59bb00725ae56c16c0f2ef7a012c48d"} Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.397733 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" podUID="4a3cde05-282a-4c65-9570-74d04c71a034" Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.401441 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" event={"ID":"739941d0-4bff-4dae-8f01-636386a37dd0","Type":"ContainerStarted","Data":"5efab640d65cac0525e62cc953e3c450515f956276b7be6332d4a135bc77b341"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.402629 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" event={"ID":"ef60eda4-7ead-499b-b70f-07a34574096f","Type":"ContainerStarted","Data":"5cc777d9a45a187848c8e7aad90fa31e325037e623c8bfbec46563c614780937"} Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.402903 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" podUID="739941d0-4bff-4dae-8f01-636386a37dd0" Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.404355 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" podUID="ef60eda4-7ead-499b-b70f-07a34574096f" Feb 19 
05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.404701 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" event={"ID":"c55ed223-371b-409a-bcb6-8ca6d2a3c908","Type":"ContainerStarted","Data":"75b0635431c48105bf4783209996d0f1630c0d67ceb2343139c64539cb777c14"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.405601 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" event={"ID":"73e25e30-860d-4faf-b1f3-bc284f7189d1","Type":"ContainerStarted","Data":"4f552f87075ec49e67fc4271c11ee0d9390ff98eca4aeb8198617b81efbec60b"} Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.407068 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" podUID="73e25e30-860d-4faf-b1f3-bc284f7189d1" Feb 19 05:39:47 crc kubenswrapper[5012]: I0219 05:39:47.359155 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.359399 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.360117 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert 
podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:39:51.359530691 +0000 UTC m=+887.392853260 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.420416 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" podUID="4f281b5b-b656-4d4a-b628-d4bfe4fc94f9" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.422888 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" podUID="ef60eda4-7ead-499b-b70f-07a34574096f" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.422965 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" podUID="739941d0-4bff-4dae-8f01-636386a37dd0" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.422989 5012 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" podUID="4a3cde05-282a-4c65-9570-74d04c71a034" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.423144 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" podUID="73e25e30-860d-4faf-b1f3-bc284f7189d1" Feb 19 05:39:47 crc kubenswrapper[5012]: I0219 05:39:47.665623 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.666420 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.666576 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:51.666557085 +0000 UTC m=+887.699879654 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: I0219 05:39:47.868941 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:47 crc kubenswrapper[5012]: I0219 05:39:47.869613 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.869120 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.869844 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:51.869821503 +0000 UTC m=+887.903144072 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.869703 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.869979 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:51.869968046 +0000 UTC m=+887.903290615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: I0219 05:39:51.428346 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.428479 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.428609 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert 
podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:39:59.428595781 +0000 UTC m=+895.461918350 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: I0219 05:39:51.737627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.741032 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.745978 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:59.745947966 +0000 UTC m=+895.779270535 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: I0219 05:39:51.947362 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:51 crc kubenswrapper[5012]: I0219 05:39:51.947416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.947599 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.947649 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:59.947632794 +0000 UTC m=+895.980955363 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.947754 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.947900 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:59.94786227 +0000 UTC m=+895.981184899 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.206683 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.207527 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vpjpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-xzk2n_openstack-operators(0cc1b41b-fbf6-4d0c-b721-dcad09c03feb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.208789 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" podUID="0cc1b41b-fbf6-4d0c-b721-dcad09c03feb" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.495078 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" podUID="0cc1b41b-fbf6-4d0c-b721-dcad09c03feb" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.834522 5012 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.834721 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rklzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-6hfg4_openstack-operators(c55ed223-371b-409a-bcb6-8ca6d2a3c908): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.836553 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" podUID="c55ed223-371b-409a-bcb6-8ca6d2a3c908" Feb 19 05:39:58 crc kubenswrapper[5012]: E0219 05:39:58.502393 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" podUID="c55ed223-371b-409a-bcb6-8ca6d2a3c908" Feb 19 05:39:59 crc kubenswrapper[5012]: I0219 05:39:59.467182 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.467443 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.467543 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:40:15.467521344 +0000 UTC m=+911.500843923 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: I0219 05:39:59.772992 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.773212 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.773747 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:40:15.773726478 +0000 UTC m=+911.807049057 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.919422 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.919694 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7k272,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-qzq7x_openstack-operators(8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.920908 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" podUID="8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be" Feb 19 05:39:59 crc kubenswrapper[5012]: I0219 05:39:59.978201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:59 crc kubenswrapper[5012]: I0219 05:39:59.978329 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.978441 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.978538 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:40:15.978513613 +0000 UTC m=+912.011836192 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.978597 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.978691 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:40:15.978670227 +0000 UTC m=+912.011992806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:40:00 crc kubenswrapper[5012]: E0219 05:40:00.516127 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" podUID="8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be" Feb 19 05:40:00 crc kubenswrapper[5012]: E0219 05:40:00.707536 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" Feb 19 05:40:00 crc kubenswrapper[5012]: E0219 
05:40:00.707836 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bz6h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-qjpw6_openstack-operators(49d66f3b-e451-4b73-bc6a-4b854a71a4d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:40:00 crc kubenswrapper[5012]: E0219 05:40:00.709156 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" podUID="49d66f3b-e451-4b73-bc6a-4b854a71a4d6" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.223047 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.223282 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jcl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-dgldv_openstack-operators(8629b5e4-e6a8-4c47-b76b-f58a26b42912): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.224559 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" podUID="8629b5e4-e6a8-4c47-b76b-f58a26b42912" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.525471 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" podUID="49d66f3b-e451-4b73-bc6a-4b854a71a4d6" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.527728 5012 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" podUID="8629b5e4-e6a8-4c47-b76b-f58a26b42912" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.838588 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.838754 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7rs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-9zkvx_openstack-operators(dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.840018 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" podUID="dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43" Feb 19 05:40:02 crc kubenswrapper[5012]: E0219 05:40:02.529146 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" podUID="dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43" Feb 19 05:40:03 crc kubenswrapper[5012]: E0219 05:40:03.088052 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 19 05:40:03 crc kubenswrapper[5012]: E0219 05:40:03.088926 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xf7vn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-l65c5_openstack-operators(457202a7-ae9f-4d06-8690-d220e532b305): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:40:03 crc kubenswrapper[5012]: E0219 05:40:03.090276 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" podUID="457202a7-ae9f-4d06-8690-d220e532b305" Feb 19 05:40:03 crc kubenswrapper[5012]: E0219 05:40:03.537847 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" podUID="457202a7-ae9f-4d06-8690-d220e532b305" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.568669 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" event={"ID":"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd","Type":"ContainerStarted","Data":"ce9ba9dd3a9689fda25dde0374abb8eeef49a7a1b960f3335da940769d1dfb72"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.568938 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.570019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" event={"ID":"739941d0-4bff-4dae-8f01-636386a37dd0","Type":"ContainerStarted","Data":"41e527e51cfa6c21b3b0a1d47834362ed3c08eda72c23067a5b83ed7da4219aa"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.570187 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.571293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" event={"ID":"11d49fcd-6e31-47e5-84a1-c6ae972e13cb","Type":"ContainerStarted","Data":"52b7b6abaff066152390196213b574b1a471cf22ba37646da14a6a1cd804c17c"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.571482 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.573282 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" event={"ID":"ef60eda4-7ead-499b-b70f-07a34574096f","Type":"ContainerStarted","Data":"c1ba4d205aa3ee79d3975e9041ff63bbcdfa0dba4991ea498b0e241ec8cd09c3"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.573459 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.580465 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" event={"ID":"b123191d-e55b-4ddc-90ea-abcb34c97be2","Type":"ContainerStarted","Data":"2f7f8ffa24601d945eb9127adf1b9d648590be6416b6871afeb7c96a15fb7634"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.580588 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.585850 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" event={"ID":"1e872b11-03d6-4d3f-8e06-e10e1e73d917","Type":"ContainerStarted","Data":"8454ddceb9f7c7432891d0362d364721444a3d3389baa49984ec90dd9bedcbfc"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.585983 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.587747 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" event={"ID":"08a4f79c-e42e-4609-b104-01b9a05ac95a","Type":"ContainerStarted","Data":"13ded6e10183374ce42cdb2c87fb3d14305ffaa1c8f76c704c4b93262623a139"} Feb 19 05:40:06 crc 
kubenswrapper[5012]: I0219 05:40:06.587807 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.588504 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" podStartSLOduration=4.924003061 podStartE2EDuration="23.588494313s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:44.404278634 +0000 UTC m=+880.437601203" lastFinishedPulling="2026-02-19 05:40:03.068769886 +0000 UTC m=+899.102092455" observedRunningTime="2026-02-19 05:40:06.5875239 +0000 UTC m=+902.620846469" watchObservedRunningTime="2026-02-19 05:40:06.588494313 +0000 UTC m=+902.621816882" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.589863 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" event={"ID":"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9","Type":"ContainerStarted","Data":"f3eebdfd0c380fa6d309b6b90709dd1d2ca1551ee3ebd735ab4b0cd719b74a47"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.590018 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.592510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" event={"ID":"e9e07b56-2724-4046-8a60-81b751fb0588","Type":"ContainerStarted","Data":"18666c2dd62284e9696eeae27bf48669d12cd0fa8fb141aa99cf14dfddb319a2"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.592849 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" Feb 19 05:40:06 crc 
kubenswrapper[5012]: I0219 05:40:06.593928 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" event={"ID":"10e6fa53-581b-4965-8a38-c70a5c61c6d7","Type":"ContainerStarted","Data":"21830277d8d70a6ba7a48febc68f048f8f86cc5bc5285664e462d39d8897c7bb"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.594582 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.595578 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" event={"ID":"8af03a54-ad7a-4684-b5a6-ba83f410e6ed","Type":"ContainerStarted","Data":"6f7a49a30d338c4b70774021b80f8d34e54904a95a68d1fe18abac136042c3c1"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.595910 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.604243 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" event={"ID":"73e25e30-860d-4faf-b1f3-bc284f7189d1","Type":"ContainerStarted","Data":"26fbdc9144715d1125ed43062df9ba915be8d4469b9a861910822792449590b8"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.604424 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.605109 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" podStartSLOduration=6.929182781 podStartE2EDuration="23.605094478s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" 
firstStartedPulling="2026-02-19 05:39:45.146677665 +0000 UTC m=+881.180000234" lastFinishedPulling="2026-02-19 05:40:01.822589362 +0000 UTC m=+897.855911931" observedRunningTime="2026-02-19 05:40:06.599296186 +0000 UTC m=+902.632618755" watchObservedRunningTime="2026-02-19 05:40:06.605094478 +0000 UTC m=+902.638417047" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.614970 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" podStartSLOduration=3.49723525 podStartE2EDuration="23.614958618s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.141750035 +0000 UTC m=+881.175072594" lastFinishedPulling="2026-02-19 05:40:05.259473403 +0000 UTC m=+901.292795962" observedRunningTime="2026-02-19 05:40:06.612693962 +0000 UTC m=+902.646016531" watchObservedRunningTime="2026-02-19 05:40:06.614958618 +0000 UTC m=+902.648281187" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.631806 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" podStartSLOduration=6.505899736 podStartE2EDuration="23.631790097s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:44.696725832 +0000 UTC m=+880.730048411" lastFinishedPulling="2026-02-19 05:40:01.822616203 +0000 UTC m=+897.855938772" observedRunningTime="2026-02-19 05:40:06.62819816 +0000 UTC m=+902.661520729" watchObservedRunningTime="2026-02-19 05:40:06.631790097 +0000 UTC m=+902.665112666" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.648613 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" podStartSLOduration=3.7097856350000002 podStartE2EDuration="23.648600127s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" 
firstStartedPulling="2026-02-19 05:39:45.39376248 +0000 UTC m=+881.427085049" lastFinishedPulling="2026-02-19 05:40:05.332576972 +0000 UTC m=+901.365899541" observedRunningTime="2026-02-19 05:40:06.647034338 +0000 UTC m=+902.680356907" watchObservedRunningTime="2026-02-19 05:40:06.648600127 +0000 UTC m=+902.681922696" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.665114 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" podStartSLOduration=3.74742978 podStartE2EDuration="23.665101428s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.477276053 +0000 UTC m=+881.510598622" lastFinishedPulling="2026-02-19 05:40:05.394947701 +0000 UTC m=+901.428270270" observedRunningTime="2026-02-19 05:40:06.664968585 +0000 UTC m=+902.698291154" watchObservedRunningTime="2026-02-19 05:40:06.665101428 +0000 UTC m=+902.698423997" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.682863 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" podStartSLOduration=5.427503557 podStartE2EDuration="23.68284722s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.352193878 +0000 UTC m=+881.385516447" lastFinishedPulling="2026-02-19 05:40:03.607537541 +0000 UTC m=+899.640860110" observedRunningTime="2026-02-19 05:40:06.681121238 +0000 UTC m=+902.714443807" watchObservedRunningTime="2026-02-19 05:40:06.68284722 +0000 UTC m=+902.716169789" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.702030 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" podStartSLOduration=4.798391923 podStartE2EDuration="23.702011267s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 
05:39:45.360750216 +0000 UTC m=+881.394072775" lastFinishedPulling="2026-02-19 05:40:04.26436955 +0000 UTC m=+900.297692119" observedRunningTime="2026-02-19 05:40:06.694111664 +0000 UTC m=+902.727434233" watchObservedRunningTime="2026-02-19 05:40:06.702011267 +0000 UTC m=+902.735333836" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.742050 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" podStartSLOduration=3.928461377 podStartE2EDuration="23.742032371s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.491765395 +0000 UTC m=+881.525087954" lastFinishedPulling="2026-02-19 05:40:05.305336379 +0000 UTC m=+901.338658948" observedRunningTime="2026-02-19 05:40:06.716599142 +0000 UTC m=+902.749921701" watchObservedRunningTime="2026-02-19 05:40:06.742032371 +0000 UTC m=+902.775354940" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.743512 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" podStartSLOduration=6.407677267 podStartE2EDuration="23.743505817s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:44.486766792 +0000 UTC m=+880.520089361" lastFinishedPulling="2026-02-19 05:40:01.822595342 +0000 UTC m=+897.855917911" observedRunningTime="2026-02-19 05:40:06.738955446 +0000 UTC m=+902.772278015" watchObservedRunningTime="2026-02-19 05:40:06.743505817 +0000 UTC m=+902.776828386" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.781835 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" podStartSLOduration=3.85547489 podStartE2EDuration="23.781816619s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.380408005 +0000 UTC 
m=+881.413730574" lastFinishedPulling="2026-02-19 05:40:05.306749734 +0000 UTC m=+901.340072303" observedRunningTime="2026-02-19 05:40:06.763583116 +0000 UTC m=+902.796905685" watchObservedRunningTime="2026-02-19 05:40:06.781816619 +0000 UTC m=+902.815139188" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.782904 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" podStartSLOduration=6.337480927 podStartE2EDuration="23.782898196s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.013552484 +0000 UTC m=+881.046875053" lastFinishedPulling="2026-02-19 05:40:02.458969743 +0000 UTC m=+898.492292322" observedRunningTime="2026-02-19 05:40:06.781537623 +0000 UTC m=+902.814860192" watchObservedRunningTime="2026-02-19 05:40:06.782898196 +0000 UTC m=+902.816220765" Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.852432 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.856480 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.863368 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.930518 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.930603 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.930626 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032079 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032199 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032693 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032721 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.053336 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.178619 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.646753 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" event={"ID":"4a3cde05-282a-4c65-9570-74d04c71a034","Type":"ContainerStarted","Data":"db85e8efce83c4aaf4a4ce23309b971ff444610da3fcb1309b7fbd49329e16ad"} Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.647609 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:09 crc kubenswrapper[5012]: W0219 05:40:09.669590 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3ffb30_20cd_4e13_a51c_9d159b1ac3c4.slice/crio-a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649 WatchSource:0}: Error finding container a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649: Status 404 returned error can't find the container with id a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649 Feb 19 05:40:10 crc kubenswrapper[5012]: I0219 05:40:10.655355 5012 generic.go:334] "Generic (PLEG): container finished" podID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerID="63155282e6f7bbbff3f0ae2f5fef6845f4cb91c8ffd7d7fe82a1ae65c034544e" exitCode=0 Feb 19 05:40:10 crc kubenswrapper[5012]: I0219 05:40:10.655496 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerDied","Data":"63155282e6f7bbbff3f0ae2f5fef6845f4cb91c8ffd7d7fe82a1ae65c034544e"} Feb 19 05:40:10 crc kubenswrapper[5012]: I0219 05:40:10.655733 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" 
event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerStarted","Data":"a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649"} Feb 19 05:40:10 crc kubenswrapper[5012]: I0219 05:40:10.691640 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" podStartSLOduration=3.327943211 podStartE2EDuration="26.691615172s" podCreationTimestamp="2026-02-19 05:39:44 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.487176064 +0000 UTC m=+881.520498633" lastFinishedPulling="2026-02-19 05:40:08.850847985 +0000 UTC m=+904.884170594" observedRunningTime="2026-02-19 05:40:09.67636679 +0000 UTC m=+905.709689389" watchObservedRunningTime="2026-02-19 05:40:10.691615172 +0000 UTC m=+906.724937781" Feb 19 05:40:11 crc kubenswrapper[5012]: I0219 05:40:11.668421 5012 generic.go:334] "Generic (PLEG): container finished" podID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerID="aafcf8e55096f9e884517934f0d856f7a5e8f3dbd56d88ebfc6b91bb108af0a5" exitCode=0 Feb 19 05:40:11 crc kubenswrapper[5012]: I0219 05:40:11.668562 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerDied","Data":"aafcf8e55096f9e884517934f0d856f7a5e8f3dbd56d88ebfc6b91bb108af0a5"} Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 05:40:12.679852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" event={"ID":"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb","Type":"ContainerStarted","Data":"c9032249bd4ffac9263705b295d828d39771a43fa220b53d832929f0dea49e6c"} Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 05:40:12.681450 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 
05:40:12.684659 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerStarted","Data":"4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc"} Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 05:40:12.700034 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" podStartSLOduration=2.675679572 podStartE2EDuration="29.700014212s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.143815605 +0000 UTC m=+881.177138174" lastFinishedPulling="2026-02-19 05:40:12.168150215 +0000 UTC m=+908.201472814" observedRunningTime="2026-02-19 05:40:12.695454331 +0000 UTC m=+908.728776940" watchObservedRunningTime="2026-02-19 05:40:12.700014212 +0000 UTC m=+908.733336781" Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 05:40:12.725804 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cxhkh" podStartSLOduration=3.304952483 podStartE2EDuration="4.725783519s" podCreationTimestamp="2026-02-19 05:40:08 +0000 UTC" firstStartedPulling="2026-02-19 05:40:10.658677601 +0000 UTC m=+906.692000210" lastFinishedPulling="2026-02-19 05:40:12.079508637 +0000 UTC m=+908.112831246" observedRunningTime="2026-02-19 05:40:12.724098828 +0000 UTC m=+908.757421407" watchObservedRunningTime="2026-02-19 05:40:12.725783519 +0000 UTC m=+908.759106098" Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.713832 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.751627 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" 
Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.804621 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.880889 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.931252 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.944804 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.079746 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.162006 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.182798 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.184733 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.366786 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.431093 
5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.431171 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.620031 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.714268 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" event={"ID":"8629b5e4-e6a8-4c47-b76b-f58a26b42912","Type":"ContainerStarted","Data":"c20300be31342ad71df790048cb886f5ba7c0f16593302cd60220954b6454876"} Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.714325 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" event={"ID":"49d66f3b-e451-4b73-bc6a-4b854a71a4d6","Type":"ContainerStarted","Data":"5a2840d29bbe3cb005c6b77ceb5334b2faea6eb91af8578859736ff997e6d97e"} Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.714340 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" event={"ID":"c55ed223-371b-409a-bcb6-8ca6d2a3c908","Type":"ContainerStarted","Data":"055a483e1e891fbc0be6f7d229ca39dac71dd6bf1c89764fcdcb42a2de49fa81"} Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 
05:40:14.715151 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.715496 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.715782 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.789314 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" podStartSLOduration=2.778634018 podStartE2EDuration="31.789275688s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.114343478 +0000 UTC m=+881.147666047" lastFinishedPulling="2026-02-19 05:40:14.124985148 +0000 UTC m=+910.158307717" observedRunningTime="2026-02-19 05:40:14.781039118 +0000 UTC m=+910.814361697" watchObservedRunningTime="2026-02-19 05:40:14.789275688 +0000 UTC m=+910.822598257" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.800828 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" podStartSLOduration=2.993711694 podStartE2EDuration="31.800814269s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.468870268 +0000 UTC m=+881.502192837" lastFinishedPulling="2026-02-19 05:40:14.275972843 +0000 UTC m=+910.309295412" observedRunningTime="2026-02-19 05:40:14.799395385 +0000 UTC m=+910.832717954" watchObservedRunningTime="2026-02-19 05:40:14.800814269 +0000 UTC m=+910.834136838" Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.830456 5012 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" podStartSLOduration=3.013503745 podStartE2EDuration="31.83043572s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.352955896 +0000 UTC m=+881.386278465" lastFinishedPulling="2026-02-19 05:40:14.169887871 +0000 UTC m=+910.203210440" observedRunningTime="2026-02-19 05:40:14.824478665 +0000 UTC m=+910.857801234" watchObservedRunningTime="2026-02-19 05:40:14.83043572 +0000 UTC m=+910.863758289" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.551686 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.565038 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.594222 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-llgkh" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.603375 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.725778 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" event={"ID":"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43","Type":"ContainerStarted","Data":"aa7e878a41bf73aefcc53e99b65aa02626a2a4c40ad79a53aa40b9bbf411dc72"} Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.726871 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.750325 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" event={"ID":"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be","Type":"ContainerStarted","Data":"49bf4dabe3e854716ac981e48b7349f157fd95eb991fd5c160d7fb183d62ffec"} Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.751028 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.758870 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" podStartSLOduration=2.664331566 podStartE2EDuration="32.75885415s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.067222011 +0000 UTC m=+881.100544580" lastFinishedPulling="2026-02-19 05:40:15.161744595 +0000 UTC m=+911.195067164" observedRunningTime="2026-02-19 05:40:15.754644098 +0000 UTC m=+911.787966667" watchObservedRunningTime="2026-02-19 05:40:15.75885415 +0000 UTC m=+911.792176719" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.782515 5012 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" podStartSLOduration=2.367585193 podStartE2EDuration="32.782496246s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:44.706242554 +0000 UTC m=+880.739565123" lastFinishedPulling="2026-02-19 05:40:15.121153607 +0000 UTC m=+911.154476176" observedRunningTime="2026-02-19 05:40:15.777544675 +0000 UTC m=+911.810867254" watchObservedRunningTime="2026-02-19 05:40:15.782496246 +0000 UTC m=+911.815818815" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.857265 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.872134 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.953143 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dfvzm" Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.960773 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.060499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.060575 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.063984 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.064648 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.149785 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"] Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.362274 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d8sxf" Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.370425 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.463621 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"] Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.667863 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"] Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.758572 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" event={"ID":"d1f124a8-4132-458d-a5a5-1839d31e7772","Type":"ContainerStarted","Data":"4e27172a3ea7c2b38d5773114946ce08511db334df5da49a09d7658be6255515"} Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.760050 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" event={"ID":"457202a7-ae9f-4d06-8690-d220e532b305","Type":"ContainerStarted","Data":"278cbbe025cde94400df481393ab560ec00034782b9295958627ef650894e9e8"} Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.760248 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.761288 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" event={"ID":"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0","Type":"ContainerStarted","Data":"3ea2b185436885db83e92745816adf82ef54b157c12c24b39bb123293d2c8228"} Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.762141 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" event={"ID":"996bfd61-486b-432d-9e09-d3a90ff9124c","Type":"ContainerStarted","Data":"3d2b640d2d5bedc755ffda4d83a902a8ebf56ccf7d6c44f4ff14b786469ac48c"} Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.777433 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" podStartSLOduration=3.011315282 podStartE2EDuration="33.777410824s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.37241 +0000 UTC m=+881.405732569" lastFinishedPulling="2026-02-19 05:40:16.138505522 +0000 UTC m=+912.171828111" observedRunningTime="2026-02-19 05:40:16.773188761 +0000 UTC m=+912.806511330" watchObservedRunningTime="2026-02-19 05:40:16.777410824 +0000 UTC m=+912.810733393" Feb 19 05:40:17 crc kubenswrapper[5012]: I0219 05:40:17.771410 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" event={"ID":"d1f124a8-4132-458d-a5a5-1839d31e7772","Type":"ContainerStarted","Data":"6c7b1312a1db5b69bd08ec2601f12660b0884fd2b593cdffa0a7c346e955ef18"} Feb 19 05:40:17 crc kubenswrapper[5012]: I0219 05:40:17.801017 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" podStartSLOduration=33.800988139 podStartE2EDuration="33.800988139s" podCreationTimestamp="2026-02-19 05:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:40:17.794104932 +0000 UTC m=+913.827427521" watchObservedRunningTime="2026-02-19 05:40:17.800988139 +0000 UTC m=+913.834310728" Feb 19 05:40:18 crc kubenswrapper[5012]: I0219 05:40:18.778272 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.179418 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.179503 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.245953 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.851482 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.905160 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:21 crc kubenswrapper[5012]: I0219 05:40:21.806682 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cxhkh" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="registry-server" containerID="cri-o://4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc" gracePeriod=2 Feb 19 05:40:22 crc kubenswrapper[5012]: I0219 05:40:22.820524 5012 generic.go:334] "Generic (PLEG): container finished" podID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerID="4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc" exitCode=0 Feb 19 
05:40:22 crc kubenswrapper[5012]: I0219 05:40:22.820598 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerDied","Data":"4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc"} Feb 19 05:40:23 crc kubenswrapper[5012]: I0219 05:40:23.763142 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" Feb 19 05:40:23 crc kubenswrapper[5012]: I0219 05:40:23.855608 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" Feb 19 05:40:23 crc kubenswrapper[5012]: I0219 05:40:23.878034 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" Feb 19 05:40:23 crc kubenswrapper[5012]: I0219 05:40:23.998705 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" Feb 19 05:40:24 crc kubenswrapper[5012]: I0219 05:40:24.116911 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" Feb 19 05:40:24 crc kubenswrapper[5012]: I0219 05:40:24.212875 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" Feb 19 05:40:24 crc kubenswrapper[5012]: I0219 05:40:24.260159 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" Feb 19 05:40:26 crc kubenswrapper[5012]: I0219 05:40:26.379425 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.306393 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.444376 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") pod \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.444493 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") pod \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.444542 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") pod \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.445625 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities" (OuterVolumeSpecName: "utilities") pod "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" (UID: "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.451619 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp" (OuterVolumeSpecName: "kube-api-access-pcmmp") pod "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" (UID: "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4"). InnerVolumeSpecName "kube-api-access-pcmmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.546031 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") on node \"crc\" DevicePath \"\"" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.546062 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.547903 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" (UID: "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.647114 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.880024 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" event={"ID":"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0","Type":"ContainerStarted","Data":"2cf7deccf3a6b020bae25dc205a1d46a1f232e81416d627b0ea595a0d080ee7c"} Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.880292 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.883690 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" event={"ID":"996bfd61-486b-432d-9e09-d3a90ff9124c","Type":"ContainerStarted","Data":"df4d02a809f04b1b9c7f9c2725bcd62797273e173a717aabd4998912df93e448"} Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.883900 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.900182 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerDied","Data":"a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649"} Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.900263 5012 scope.go:117] "RemoveContainer" containerID="4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc" Feb 19 05:40:27 crc 
kubenswrapper[5012]: I0219 05:40:27.901168 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.928102 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" podStartSLOduration=34.097059755 podStartE2EDuration="44.928084194s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:40:16.476224663 +0000 UTC m=+912.509547272" lastFinishedPulling="2026-02-19 05:40:27.307249142 +0000 UTC m=+923.340571711" observedRunningTime="2026-02-19 05:40:27.919857214 +0000 UTC m=+923.953179823" watchObservedRunningTime="2026-02-19 05:40:27.928084194 +0000 UTC m=+923.961406763" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.937871 5012 scope.go:117] "RemoveContainer" containerID="aafcf8e55096f9e884517934f0d856f7a5e8f3dbd56d88ebfc6b91bb108af0a5" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.966932 5012 scope.go:117] "RemoveContainer" containerID="63155282e6f7bbbff3f0ae2f5fef6845f4cb91c8ffd7d7fe82a1ae65c034544e" Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.967203 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" podStartSLOduration=33.819052108 podStartE2EDuration="44.967176456s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:40:16.174944319 +0000 UTC m=+912.208266888" lastFinishedPulling="2026-02-19 05:40:27.323068667 +0000 UTC m=+923.356391236" observedRunningTime="2026-02-19 05:40:27.967097554 +0000 UTC m=+924.000420133" watchObservedRunningTime="2026-02-19 05:40:27.967176456 +0000 UTC m=+924.000499055" Feb 19 05:40:28 crc kubenswrapper[5012]: I0219 05:40:28.001094 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:28 crc kubenswrapper[5012]: I0219 05:40:28.006576 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:28 crc kubenswrapper[5012]: I0219 05:40:28.711904 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" path="/var/lib/kubelet/pods/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4/volumes" Feb 19 05:40:35 crc kubenswrapper[5012]: I0219 05:40:35.610275 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:40:35 crc kubenswrapper[5012]: I0219 05:40:35.971523 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.431018 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.431727 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.431810 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.432762 5012 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.432884 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a" gracePeriod=600 Feb 19 05:40:45 crc kubenswrapper[5012]: I0219 05:40:45.067721 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a" exitCode=0 Feb 19 05:40:45 crc kubenswrapper[5012]: I0219 05:40:45.067875 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a"} Feb 19 05:40:45 crc kubenswrapper[5012]: I0219 05:40:45.068122 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915"} Feb 19 05:40:45 crc kubenswrapper[5012]: I0219 05:40:45.068157 5012 scope.go:117] "RemoveContainer" containerID="2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.696188 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:40:55 crc 
kubenswrapper[5012]: E0219 05:40:55.697925 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="extract-content" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.697998 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="extract-content" Feb 19 05:40:55 crc kubenswrapper[5012]: E0219 05:40:55.698053 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="registry-server" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.698112 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="registry-server" Feb 19 05:40:55 crc kubenswrapper[5012]: E0219 05:40:55.698178 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="extract-utilities" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.698232 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="extract-utilities" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.698448 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="registry-server" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.699253 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.703236 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.703735 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.703988 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.704212 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2v55g" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.718615 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.720655 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.720723 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.750942 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"] Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.752034 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.753616 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.771382 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"] Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.821495 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.821883 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.822322 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.859481 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.923346 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.923456 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.923484 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.018004 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.024541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.024582 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.024628 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.025369 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.025876 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 
05:40:56.043513 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.070171 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.327188 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"] Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.425419 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:40:56 crc kubenswrapper[5012]: W0219 05:40:56.426656 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2c96faf_42fc_437a_894d_e1c7f75b3511.slice/crio-39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b WatchSource:0}: Error finding container 39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b: Status 404 returned error can't find the container with id 39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b Feb 19 05:40:57 crc kubenswrapper[5012]: I0219 05:40:57.181504 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" event={"ID":"862b02ed-ae65-4348-8a31-81c1aff80089","Type":"ContainerStarted","Data":"4e46c1414bde8bffdfc3f7f7ffe96ab966534855c2a4f448c23174850398a8f0"} Feb 19 05:40:57 crc kubenswrapper[5012]: I0219 05:40:57.183499 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" 
event={"ID":"b2c96faf-42fc-437a-894d-e1c7f75b3511","Type":"ContainerStarted","Data":"39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b"} Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.217554 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.235519 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.237139 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.240561 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.283252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.283531 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.283572 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " 
pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.384739 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.384841 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.384880 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.385718 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.386241 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.407550 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.497116 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.525698 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.526803 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.537460 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.564328 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.587000 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.587057 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.587151 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.688287 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.688417 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " 
pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.688442 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.689149 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.689393 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.718532 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.751916 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.790865 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.792036 5012 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.812389 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.859511 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.892382 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.892885 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.893098 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.994293 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " 
pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.994368 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.994416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.995515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.995521 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.011435 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 
05:41:00.130969 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.257760 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:41:00 crc kubenswrapper[5012]: W0219 05:41:00.271559 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a48a7cc_f140_4802_8dd4_2f4bb1c62aed.slice/crio-100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13 WatchSource:0}: Error finding container 100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13: Status 404 returned error can't find the container with id 100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13 Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.357496 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.368953 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.371119 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.374484 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.374660 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.374966 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.375215 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.375477 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.375573 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-wg2nx" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.379191 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402070 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402104 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402122 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402145 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402161 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c628866-f96d-4e7b-8846-7073c98dd389-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402202 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402224 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c628866-f96d-4e7b-8846-7073c98dd389-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402251 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402269 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfwf\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-kube-api-access-qpfwf\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.408912 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503121 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503562 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503597 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503616 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503638 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503655 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c628866-f96d-4e7b-8846-7073c98dd389-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503676 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503693 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503714 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c628866-f96d-4e7b-8846-7073c98dd389-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503743 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503761 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpfwf\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-kube-api-access-qpfwf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.504652 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.504764 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.504765 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.505089 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.505242 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.505764 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.507970 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c628866-f96d-4e7b-8846-7073c98dd389-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.527633 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.529061 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.530474 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpfwf\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-kube-api-access-qpfwf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.531584 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c628866-f96d-4e7b-8846-7073c98dd389-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.554034 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.602233 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.655425 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.656836 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.662526 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.662666 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.662798 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.663391 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.663528 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.663544 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.663697 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-s7g27" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.664755 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.704464 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.832695 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.832783 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.832820 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833124 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc 
kubenswrapper[5012]: I0219 05:41:00.833239 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833826 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833915 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833963 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.834014 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.834121 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.907262 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.909845 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.921897 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.932406 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.932659 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.933253 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.933437 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.933617 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.934381 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hd6wk" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.936898 5012 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.936935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.936971 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.936996 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937016 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937063 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937094 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937114 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937138 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937159 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.938563 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.940073 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.940597 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.941155 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.942018 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.942638 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.946986 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.948672 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.948958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.949272 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.952628 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.963696 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8phq\" (UniqueName: 
\"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.979838 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.987667 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040133 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040277 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040435 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040472 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040584 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040690 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040752 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040790 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040864 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040910 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143694 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143753 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143813 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143852 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143880 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143972 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.144023 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.144059 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.144092 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.144110 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.145183 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.145910 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.145994 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.146197 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.146565 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.147065 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.149982 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc 
kubenswrapper[5012]: I0219 05:41:01.150242 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.152487 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.165223 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.174435 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.182059 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.193670 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] 
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.242256 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.276345 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65886c9755-l2845" event={"ID":"57e2c914-87bd-46f8-92c7-e87437f6758a","Type":"ContainerStarted","Data":"2ef7e60f2849b48568b2db26b7cbdccc8e5409326bef7865ef80fbf90f513b0e"} Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.279790 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"3c628866-f96d-4e7b-8846-7073c98dd389","Type":"ContainerStarted","Data":"ae645339d04f191bc4c70c73035c85e9ed7afd942642f9724125802b95690a47"} Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.284569 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" event={"ID":"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed","Type":"ContainerStarted","Data":"100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13"} Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.298070 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerStarted","Data":"4cb41e822d4dbb1861f13461a8bcb5e410e5b409d268141b4a6e8e97a369da40"} Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.437410 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:41:01 crc kubenswrapper[5012]: W0219 05:41:01.448880 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0095712_262e_4562_afac_0f2f4372224d.slice/crio-a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01 WatchSource:0}: Error finding container 
a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01: Status 404 returned error can't find the container with id a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01 Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.826061 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.317883 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.320842 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerStarted","Data":"a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01"} Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.321094 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.322914 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerStarted","Data":"856efb676cb6080920d1573427ad1823ab21a0fe78f76dfb2cca62d969151964"} Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.324863 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.325354 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.325400 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rjhwx" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.325595 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 05:41:02 crc 
kubenswrapper[5012]: I0219 05:41:02.330617 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.354041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470544 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470833 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470867 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470914 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470956 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-l6w2k\" (UniqueName: \"kubernetes.io/projected/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kube-api-access-l6w2k\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.471008 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.471038 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.471469 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572214 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572257 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6w2k\" (UniqueName: 
\"kubernetes.io/projected/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kube-api-access-l6w2k\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572336 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572373 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572412 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572427 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572452 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572984 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.573234 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.573837 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.575149 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.578253 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.586988 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.591765 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6w2k\" (UniqueName: \"kubernetes.io/projected/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kube-api-access-l6w2k\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.603276 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.603337 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.659110 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.299619 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 05:41:03 crc kubenswrapper[5012]: W0219 05:41:03.327725 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd0c672_e258_4feb_8bbd_26135f92f7fb.slice/crio-47f1110042288f6f94d44f934829dc5fa532b8dd81bbb097f27affb84677eafe WatchSource:0}: Error finding container 47f1110042288f6f94d44f934829dc5fa532b8dd81bbb097f27affb84677eafe: Status 404 returned error can't find the container with id 47f1110042288f6f94d44f934829dc5fa532b8dd81bbb097f27affb84677eafe Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.739906 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.742994 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.745622 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pxjvs" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.749709 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.750141 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.751808 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.752814 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921696 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921752 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921780 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921820 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921872 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921902 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921924 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwmq8\" (UniqueName: \"kubernetes.io/projected/04466d10-2177-4361-bd86-333c046b9e52-kube-api-access-dwmq8\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921956 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/04466d10-2177-4361-bd86-333c046b9e52-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023717 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023792 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023827 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwmq8\" (UniqueName: \"kubernetes.io/projected/04466d10-2177-4361-bd86-333c046b9e52-kube-api-access-dwmq8\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023867 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/04466d10-2177-4361-bd86-333c046b9e52-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023946 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023976 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.024020 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.024357 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/04466d10-2177-4361-bd86-333c046b9e52-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.024464 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") device mount path 
\"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.025739 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.025883 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.026743 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.046101 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.048515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 
05:41:04.051173 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwmq8\" (UniqueName: \"kubernetes.io/projected/04466d10-2177-4361-bd86-333c046b9e52-kube-api-access-dwmq8\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.107129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.283889 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.284805 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.288057 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.288777 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lm7dt" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.288792 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.319658 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.384701 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1fd0c672-e258-4feb-8bbd-26135f92f7fb","Type":"ContainerStarted","Data":"47f1110042288f6f94d44f934829dc5fa532b8dd81bbb097f27affb84677eafe"} Feb 19 05:41:04 crc 
kubenswrapper[5012]: I0219 05:41:04.399283 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.430749 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.430970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-config-data\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.431059 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllr6\" (UniqueName: \"kubernetes.io/projected/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kube-api-access-jllr6\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.431083 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.431143 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kolla-config\") pod \"memcached-0\" (UID: 
\"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532376 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllr6\" (UniqueName: \"kubernetes.io/projected/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kube-api-access-jllr6\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532429 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532467 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kolla-config\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532567 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532587 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-config-data\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.533436 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-config-data\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.534634 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kolla-config\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.539632 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.550688 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.551099 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllr6\" (UniqueName: \"kubernetes.io/projected/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kube-api-access-jllr6\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.617723 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.962110 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 05:41:04 crc kubenswrapper[5012]: W0219 05:41:04.988664 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a4a51f_c380_48fc_8f0e_cdd1ea09fa53.slice/crio-74ed3816d14374b8abf3c72c194cbbbe488bdd77110f3429b53676631c9f5fa3 WatchSource:0}: Error finding container 74ed3816d14374b8abf3c72c194cbbbe488bdd77110f3429b53676631c9f5fa3: Status 404 returned error can't find the container with id 74ed3816d14374b8abf3c72c194cbbbe488bdd77110f3429b53676631c9f5fa3 Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.997202 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 05:41:05 crc kubenswrapper[5012]: I0219 05:41:05.399224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"04466d10-2177-4361-bd86-333c046b9e52","Type":"ContainerStarted","Data":"4b830be35220905d2aad786468bb9e6dc43157df5e48cc3e6079dcc819616218"} Feb 19 05:41:05 crc kubenswrapper[5012]: I0219 05:41:05.401538 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53","Type":"ContainerStarted","Data":"74ed3816d14374b8abf3c72c194cbbbe488bdd77110f3429b53676631c9f5fa3"} Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.549690 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.551523 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.557516 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.559840 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sv9x8" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.674374 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") pod \"kube-state-metrics-0\" (UID: \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\") " pod="openstack/kube-state-metrics-0" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.775775 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") pod \"kube-state-metrics-0\" (UID: \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\") " pod="openstack/kube-state-metrics-0" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.815023 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") pod \"kube-state-metrics-0\" (UID: \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\") " pod="openstack/kube-state-metrics-0" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.875124 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.467604 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:41:07 crc kubenswrapper[5012]: W0219 05:41:07.496950 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c04ef21_3d68_44e8_ba69_164f3b32b7a0.slice/crio-1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23 WatchSource:0}: Error finding container 1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23: Status 404 returned error can't find the container with id 1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23 Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.638420 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.640438 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.643800 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.647482 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.648842 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.650760 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7bqtw" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.650932 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.651057 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.651189 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.651330 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.669611 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800103 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800160 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800195 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800220 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800236 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800262 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800282 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800544 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800706 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800805 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902331 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902382 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902432 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902476 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902498 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902530 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902568 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902603 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902643 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.904211 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.904241 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.904553 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.906475 5012 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.906506 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/80266977aa18e8991458f1f7d5520b709fb21586520e915bbacb4bc2380e455f/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.909512 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.909575 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.910083 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.911622 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.926231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.933488 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.939080 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.976914 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:08 crc kubenswrapper[5012]: I0219 05:41:08.456984 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:41:08 crc kubenswrapper[5012]: I0219 05:41:08.586656 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c04ef21-3d68-44e8-ba69-164f3b32b7a0","Type":"ContainerStarted","Data":"1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23"} Feb 19 05:41:08 crc kubenswrapper[5012]: W0219 05:41:08.618179 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e31edbd_c20b_420d_8888_cafb392410cd.slice/crio-91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb WatchSource:0}: Error finding container 91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb: Status 404 returned error can't find the container with id 91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.552681 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cr94m"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.553982 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.568732 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.568991 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pmbzw" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.569006 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7qdpg"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.569274 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.574440 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.587535 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cr94m"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.587583 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7qdpg"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.645289 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16fbaba1-bd32-4121-8743-99422db74180-scripts\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.645424 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-etc-ovs\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " 
pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.645452 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.645498 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-combined-ca-bundle\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646083 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-log-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646119 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-lib\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646164 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2crw\" (UniqueName: \"kubernetes.io/projected/16fbaba1-bd32-4121-8743-99422db74180-kube-api-access-f2crw\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " 
pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646186 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rjj8\" (UniqueName: \"kubernetes.io/projected/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-kube-api-access-5rjj8\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646202 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646251 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-log\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646575 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-scripts\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646615 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-run\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 
crc kubenswrapper[5012]: I0219 05:41:09.646635 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-ovn-controller-tls-certs\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.668716 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb"} Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749130 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2crw\" (UniqueName: \"kubernetes.io/projected/16fbaba1-bd32-4121-8743-99422db74180-kube-api-access-f2crw\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749184 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rjj8\" (UniqueName: \"kubernetes.io/projected/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-kube-api-access-5rjj8\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749202 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749239 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-log\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749267 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-scripts\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749295 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-run\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749331 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-ovn-controller-tls-certs\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749359 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16fbaba1-bd32-4121-8743-99422db74180-scripts\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-etc-ovs\") pod \"ovn-controller-ovs-7qdpg\" (UID: 
\"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749435 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749452 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-combined-ca-bundle\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749470 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-log-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749490 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-lib\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.750054 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-lib\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.750653 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.750767 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-log\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.751463 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-etc-ovs\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.751505 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-log-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.751656 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-run\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.751698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run\") pod \"ovn-controller-cr94m\" (UID: 
\"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.756050 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16fbaba1-bd32-4121-8743-99422db74180-scripts\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.758801 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-ovn-controller-tls-certs\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.759701 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-scripts\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.781968 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rjj8\" (UniqueName: \"kubernetes.io/projected/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-kube-api-access-5rjj8\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.784258 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2crw\" (UniqueName: \"kubernetes.io/projected/16fbaba1-bd32-4121-8743-99422db74180-kube-api-access-f2crw\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 
05:41:09.785897 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-combined-ca-bundle\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.855503 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.858620 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.863448 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4dbr9" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.863507 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.863763 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.863910 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.865730 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.887487 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.951056 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959246 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959335 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959369 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-config\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959395 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959423 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc 
kubenswrapper[5012]: I0219 05:41:09.959445 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959462 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57n8f\" (UniqueName: \"kubernetes.io/projected/5a9e6735-4159-4248-a8f5-6714d386901a-kube-api-access-57n8f\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959487 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.972595 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.060836 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.060892 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.060925 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57n8f\" (UniqueName: \"kubernetes.io/projected/5a9e6735-4159-4248-a8f5-6714d386901a-kube-api-access-57n8f\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.060956 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.061015 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.061054 5012 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.061087 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-config\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.061108 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.062264 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.062513 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.062661 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.062789 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.067082 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.070733 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.082045 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.083160 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57n8f\" (UniqueName: \"kubernetes.io/projected/5a9e6735-4159-4248-a8f5-6714d386901a-kube-api-access-57n8f\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.083996 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.183225 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.437514 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.439483 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.444268 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.444688 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-cpr8c" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.444945 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.445724 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.452041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538393 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wq4x\" (UniqueName: \"kubernetes.io/projected/00790bd0-5fbb-4927-8361-085c9691c171-kube-api-access-9wq4x\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc 
kubenswrapper[5012]: I0219 05:41:13.538440 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538513 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538539 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538590 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00790bd0-5fbb-4927-8361-085c9691c171-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538608 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-config\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538630 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.639901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.639991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00790bd0-5fbb-4927-8361-085c9691c171-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-config\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640049 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640076 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640097 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wq4x\" (UniqueName: \"kubernetes.io/projected/00790bd0-5fbb-4927-8361-085c9691c171-kube-api-access-9wq4x\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640124 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640190 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.641523 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 
05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.642684 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00790bd0-5fbb-4927-8361-085c9691c171-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.642819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-config\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.642983 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.650234 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.652824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.655526 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.660469 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wq4x\" (UniqueName: \"kubernetes.io/projected/00790bd0-5fbb-4927-8361-085c9691c171-kube-api-access-9wq4x\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.666787 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.765217 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:21 crc kubenswrapper[5012]: I0219 05:41:21.878526 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.156443 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.156896 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.157024 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jcbl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(6c04ef21-3d68-44e8-ba69-164f3b32b7a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.158822 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" Feb 19 05:41:25 crc kubenswrapper[5012]: I0219 05:41:25.698543 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7qdpg"] Feb 19 05:41:25 crc kubenswrapper[5012]: I0219 
05:41:25.839059 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5a9e6735-4159-4248-a8f5-6714d386901a","Type":"ContainerStarted","Data":"2c6a79ea5c6119196f3da355e77e22d680a3eca004fa8ac8fee6d4e710f0e13e"} Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.843495 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" Feb 19 05:41:29 crc kubenswrapper[5012]: I0219 05:41:29.875743 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7qdpg" event={"ID":"16fbaba1-bd32-4121-8743-99422db74180","Type":"ContainerStarted","Data":"39a6bd400d41740054527b3f52c850bca672c6e54784d97eb1a4cd34a485c239"} Feb 19 05:41:30 crc kubenswrapper[5012]: I0219 05:41:30.069762 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cr94m"] Feb 19 05:41:30 crc kubenswrapper[5012]: I0219 05:41:30.212199 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 05:41:31 crc kubenswrapper[5012]: W0219 05:41:31.860227 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2c9ac17_43ef_4ccb_83b1_e20ee03289de.slice/crio-235c8db2831222c53397bad4dd52402a9fb9e7fae73bb29dbc9edb0fdc48bdec WatchSource:0}: Error finding container 235c8db2831222c53397bad4dd52402a9fb9e7fae73bb29dbc9edb0fdc48bdec: Status 404 returned error can't find the container with id 235c8db2831222c53397bad4dd52402a9fb9e7fae73bb29dbc9edb0fdc48bdec Feb 19 05:41:31 crc kubenswrapper[5012]: I0219 05:41:31.898781 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-cr94m" event={"ID":"e2c9ac17-43ef-4ccb-83b1-e20ee03289de","Type":"ContainerStarted","Data":"235c8db2831222c53397bad4dd52402a9fb9e7fae73bb29dbc9edb0fdc48bdec"} Feb 19 05:41:35 crc kubenswrapper[5012]: E0219 05:41:35.173015 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:35 crc kubenswrapper[5012]: E0219 05:41:35.174024 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:35 crc kubenswrapper[5012]: E0219 05:41:35.174222 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8468885bfc-vwkhm_openstack(b2c96faf-42fc-437a-894d-e1c7f75b3511): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:35 crc kubenswrapper[5012]: E0219 05:41:35.178433 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" podUID="b2c96faf-42fc-437a-894d-e1c7f75b3511" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.436866 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.437199 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.437431 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpfwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod notifications-rabbitmq-server-0_openstack(3c628866-f96d-4e7b-8846-7073c98dd389): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc 
kubenswrapper[5012]: E0219 05:41:36.438555 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/notifications-rabbitmq-server-0" podUID="3c628866-f96d-4e7b-8846-7073c98dd389" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.479895 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.479983 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.480185 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n684h65fh56h6fh87h85h57h76h5b7h94hffh649hfbh8ch5bch56fh5c5hbh86hf9h99h5dch95h66hd5h555h566h646h546h79h9dh55dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gwbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-59ddbc48b7-4t5tr_openstack(4a48a7cc-f140-4802-8dd4-2f4bb1c62aed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.481622 5012 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" podUID="4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.510142 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.510192 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.510345 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkzpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-545d49fd5c-td7mg_openstack(862b02ed-ae65-4348-8a31-81c1aff80089): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.511536 5012 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" podUID="862b02ed-ae65-4348-8a31-81c1aff80089" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.517283 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.546690 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.546746 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.546865 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n658h5c5h88h68dhb6h57dhd4h697hb8h8fh74hb7h54fh54dh548h7h55dhb8h9fh55dh688h5bbh5d5h675h669hb7h67hbbhffh668h5c7hc5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvbph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-65886c9755-l2845_openstack(57e2c914-87bd-46f8-92c7-e87437f6758a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.548068 5012 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-65886c9755-l2845" podUID="57e2c914-87bd-46f8-92c7-e87437f6758a" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.614995 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.615071 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.615214 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zch8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(a13d3004-2045-4daf-a925-7eccf541b1b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc 
kubenswrapper[5012]: E0219 05:41:36.617227 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.684090 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") pod \"b2c96faf-42fc-437a-894d-e1c7f75b3511\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.684143 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") pod \"b2c96faf-42fc-437a-894d-e1c7f75b3511\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.684931 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config" (OuterVolumeSpecName: "config") pod "b2c96faf-42fc-437a-894d-e1c7f75b3511" (UID: "b2c96faf-42fc-437a-894d-e1c7f75b3511"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.728436 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7" (OuterVolumeSpecName: "kube-api-access-gshm7") pod "b2c96faf-42fc-437a-894d-e1c7f75b3511" (UID: "b2c96faf-42fc-437a-894d-e1c7f75b3511"). InnerVolumeSpecName "kube-api-access-gshm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.785512 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.785539 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.947520 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"00790bd0-5fbb-4927-8361-085c9691c171","Type":"ContainerStarted","Data":"b445ec4833440a541a722f31831dc1bac99cd73ab1cbec06d26f61e4470fd929"} Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.949655 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" event={"ID":"b2c96faf-42fc-437a-894d-e1c7f75b3511","Type":"ContainerDied","Data":"39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b"} Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.949723 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.952058 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.953589 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current\\\"\"" pod="openstack/dnsmasq-dns-65886c9755-l2845" podUID="57e2c914-87bd-46f8-92c7-e87437f6758a" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.954051 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/notifications-rabbitmq-server-0" podUID="3c628866-f96d-4e7b-8846-7073c98dd389" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.046114 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.064594 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.362644 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.365026 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502185 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") pod \"862b02ed-ae65-4348-8a31-81c1aff80089\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502692 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") pod \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502722 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") pod \"862b02ed-ae65-4348-8a31-81c1aff80089\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502757 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") pod \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502819 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") pod \"862b02ed-ae65-4348-8a31-81c1aff80089\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502857 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") pod \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.503568 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" (UID: "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.503869 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "862b02ed-ae65-4348-8a31-81c1aff80089" (UID: "862b02ed-ae65-4348-8a31-81c1aff80089"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.504150 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config" (OuterVolumeSpecName: "config") pod "862b02ed-ae65-4348-8a31-81c1aff80089" (UID: "862b02ed-ae65-4348-8a31-81c1aff80089"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.505583 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv" (OuterVolumeSpecName: "kube-api-access-5gwbv") pod "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" (UID: "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed"). InnerVolumeSpecName "kube-api-access-5gwbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.505581 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config" (OuterVolumeSpecName: "config") pod "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" (UID: "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.506093 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw" (OuterVolumeSpecName: "kube-api-access-kkzpw") pod "862b02ed-ae65-4348-8a31-81c1aff80089" (UID: "862b02ed-ae65-4348-8a31-81c1aff80089"). InnerVolumeSpecName "kube-api-access-kkzpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.603985 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604092 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604175 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604259 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") on node \"crc\" DevicePath 
\"\"" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604340 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604405 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.958882 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53","Type":"ContainerStarted","Data":"d71e777540c60b9d720ba610439fceea883ea752400b2d1d8790461bf48312f2"} Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.960026 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.961821 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1fd0c672-e258-4feb-8bbd-26135f92f7fb","Type":"ContainerStarted","Data":"a8e754bcf301635d8dc3f5a9e704295059c792c19bbabbb8a572e39943ecb2ef"} Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.963889 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" event={"ID":"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed","Type":"ContainerDied","Data":"100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13"} Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.963985 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.970524 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"04466d10-2177-4361-bd86-333c046b9e52","Type":"ContainerStarted","Data":"819d4936f72ac1e78543d7aa12ff4e73a784d1c30ba80b58ffdf950f2bf4e356"} Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.971999 5012 generic.go:334] "Generic (PLEG): container finished" podID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerID="110e2fb48dbdbaaee96e12fd6145e56296c9e6c4ec3ed95da58954f821868b52" exitCode=0 Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.972077 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerDied","Data":"110e2fb48dbdbaaee96e12fd6145e56296c9e6c4ec3ed95da58954f821868b52"} Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.972964 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" event={"ID":"862b02ed-ae65-4348-8a31-81c1aff80089","Type":"ContainerDied","Data":"4e46c1414bde8bffdfc3f7f7ffe96ab966534855c2a4f448c23174850398a8f0"} Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.972990 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.986955 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.589205015 podStartE2EDuration="33.986907663s" podCreationTimestamp="2026-02-19 05:41:04 +0000 UTC" firstStartedPulling="2026-02-19 05:41:04.994629623 +0000 UTC m=+961.027952192" lastFinishedPulling="2026-02-19 05:41:37.392332271 +0000 UTC m=+993.425654840" observedRunningTime="2026-02-19 05:41:37.984752429 +0000 UTC m=+994.018074998" watchObservedRunningTime="2026-02-19 05:41:37.986907663 +0000 UTC m=+994.020230252" Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.122555 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.130443 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.148640 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"] Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.159592 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"] Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.723016 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" path="/var/lib/kubelet/pods/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed/volumes" Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.724025 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862b02ed-ae65-4348-8a31-81c1aff80089" path="/var/lib/kubelet/pods/862b02ed-ae65-4348-8a31-81c1aff80089/volumes" Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.724536 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b2c96faf-42fc-437a-894d-e1c7f75b3511" path="/var/lib/kubelet/pods/b2c96faf-42fc-437a-894d-e1c7f75b3511/volumes" Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.982849 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerStarted","Data":"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"} Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.985796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerStarted","Data":"8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423"} Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.986671 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:41:39 crc kubenswrapper[5012]: I0219 05:41:39.031139 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" podStartSLOduration=3.217681985 podStartE2EDuration="40.031120634s" podCreationTimestamp="2026-02-19 05:40:59 +0000 UTC" firstStartedPulling="2026-02-19 05:41:00.645526217 +0000 UTC m=+956.678848786" lastFinishedPulling="2026-02-19 05:41:37.458964866 +0000 UTC m=+993.492287435" observedRunningTime="2026-02-19 05:41:39.01861027 +0000 UTC m=+995.051932839" watchObservedRunningTime="2026-02-19 05:41:39.031120634 +0000 UTC m=+995.064443203" Feb 19 05:41:41 crc kubenswrapper[5012]: I0219 05:41:41.005538 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.013929 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="16fbaba1-bd32-4121-8743-99422db74180" containerID="06ec94d8cf824c0dd76739679c91a4936003c296b953898581c57a6b59543f08" exitCode=0 Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.014223 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7qdpg" event={"ID":"16fbaba1-bd32-4121-8743-99422db74180","Type":"ContainerDied","Data":"06ec94d8cf824c0dd76739679c91a4936003c296b953898581c57a6b59543f08"} Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.016442 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c04ef21-3d68-44e8-ba69-164f3b32b7a0","Type":"ContainerStarted","Data":"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73"} Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.017253 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.020622 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"00790bd0-5fbb-4927-8361-085c9691c171","Type":"ContainerStarted","Data":"9af1ad265e37ae3ab34247dd530f54788d2aeafa36169002f4f1ddfe2730e33d"} Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.025149 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cr94m" event={"ID":"e2c9ac17-43ef-4ccb-83b1-e20ee03289de","Type":"ContainerStarted","Data":"2bf4f2cf3692bc11c471b18877b32f1b54456d1cebc4847f44252fd08d84746f"} Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.025729 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cr94m" Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.027383 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"5a9e6735-4159-4248-a8f5-6714d386901a","Type":"ContainerStarted","Data":"94bc8be702ca89d9fa7574a6fb62d07e6b869f19201ca8f05480725db70a91a2"} Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.053205 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cr94m" podStartSLOduration=24.079214195 podStartE2EDuration="33.053187847s" podCreationTimestamp="2026-02-19 05:41:09 +0000 UTC" firstStartedPulling="2026-02-19 05:41:31.865638038 +0000 UTC m=+987.898960647" lastFinishedPulling="2026-02-19 05:41:40.83961173 +0000 UTC m=+996.872934299" observedRunningTime="2026-02-19 05:41:42.052063529 +0000 UTC m=+998.085386118" watchObservedRunningTime="2026-02-19 05:41:42.053187847 +0000 UTC m=+998.086510416" Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.073141 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.404275761 podStartE2EDuration="36.073127308s" podCreationTimestamp="2026-02-19 05:41:06 +0000 UTC" firstStartedPulling="2026-02-19 05:41:07.502183462 +0000 UTC m=+963.535506021" lastFinishedPulling="2026-02-19 05:41:41.171034999 +0000 UTC m=+997.204357568" observedRunningTime="2026-02-19 05:41:42.07081616 +0000 UTC m=+998.104138739" watchObservedRunningTime="2026-02-19 05:41:42.073127308 +0000 UTC m=+998.106449867" Feb 19 05:41:43 crc kubenswrapper[5012]: I0219 05:41:43.044422 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7qdpg" event={"ID":"16fbaba1-bd32-4121-8743-99422db74180","Type":"ContainerStarted","Data":"941d1a59bc70fb616fd4c55a311743b95c05c720a0509a2462c3f859fa196b57"} Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.053467 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5a9e6735-4159-4248-a8f5-6714d386901a","Type":"ContainerStarted","Data":"9175750f7b363090ca147be00bf16c86f82d6d1ee52b66797a25314d9cd24fc3"} Feb 
19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.056582 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7qdpg" event={"ID":"16fbaba1-bd32-4121-8743-99422db74180","Type":"ContainerStarted","Data":"0023846d8083910d6ad0b807959a9326119e3f0edf0873e00b02493f7a10978f"} Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.056821 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.056867 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.059071 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"00790bd0-5fbb-4927-8361-085c9691c171","Type":"ContainerStarted","Data":"1e90b548675c0826432d31830abdf314c6cab338800b3959a0399a4191b6c30a"} Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.090536 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.358913966 podStartE2EDuration="36.090510454s" podCreationTimestamp="2026-02-19 05:41:08 +0000 UTC" firstStartedPulling="2026-02-19 05:41:25.164670975 +0000 UTC m=+981.197993584" lastFinishedPulling="2026-02-19 05:41:42.896267503 +0000 UTC m=+998.929590072" observedRunningTime="2026-02-19 05:41:44.082706048 +0000 UTC m=+1000.116028657" watchObservedRunningTime="2026-02-19 05:41:44.090510454 +0000 UTC m=+1000.123833063" Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.160191 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.684358249 podStartE2EDuration="32.160165654s" podCreationTimestamp="2026-02-19 05:41:12 +0000 UTC" firstStartedPulling="2026-02-19 05:41:36.438530442 +0000 UTC m=+992.471853021" lastFinishedPulling="2026-02-19 05:41:42.914337867 +0000 
UTC m=+998.947660426" observedRunningTime="2026-02-19 05:41:44.113874941 +0000 UTC m=+1000.147197540" watchObservedRunningTime="2026-02-19 05:41:44.160165654 +0000 UTC m=+1000.193488233" Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.165193 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7qdpg" podStartSLOduration=23.93139257 podStartE2EDuration="35.16518149s" podCreationTimestamp="2026-02-19 05:41:09 +0000 UTC" firstStartedPulling="2026-02-19 05:41:29.584229607 +0000 UTC m=+985.617552176" lastFinishedPulling="2026-02-19 05:41:40.818018517 +0000 UTC m=+996.851341096" observedRunningTime="2026-02-19 05:41:44.157263941 +0000 UTC m=+1000.190586530" watchObservedRunningTime="2026-02-19 05:41:44.16518149 +0000 UTC m=+1000.198504069" Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.619418 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.133506 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.185554 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.197512 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.566275 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.690706 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") pod \"57e2c914-87bd-46f8-92c7-e87437f6758a\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.691204 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57e2c914-87bd-46f8-92c7-e87437f6758a" (UID: "57e2c914-87bd-46f8-92c7-e87437f6758a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.691391 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") pod \"57e2c914-87bd-46f8-92c7-e87437f6758a\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.691644 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config" (OuterVolumeSpecName: "config") pod "57e2c914-87bd-46f8-92c7-e87437f6758a" (UID: "57e2c914-87bd-46f8-92c7-e87437f6758a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.691696 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") pod \"57e2c914-87bd-46f8-92c7-e87437f6758a\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.693151 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.693201 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.819186 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph" (OuterVolumeSpecName: "kube-api-access-tvbph") pod "57e2c914-87bd-46f8-92c7-e87437f6758a" (UID: "57e2c914-87bd-46f8-92c7-e87437f6758a"). InnerVolumeSpecName "kube-api-access-tvbph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.896948 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.074827 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65886c9755-l2845" event={"ID":"57e2c914-87bd-46f8-92c7-e87437f6758a","Type":"ContainerDied","Data":"2ef7e60f2849b48568b2db26b7cbdccc8e5409326bef7865ef80fbf90f513b0e"} Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.074875 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.135376 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.147564 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.184526 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.232861 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.722717 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e2c914-87bd-46f8-92c7-e87437f6758a" path="/var/lib/kubelet/pods/57e2c914-87bd-46f8-92c7-e87437f6758a/volumes" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.770002 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.779413 5012 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"] Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.781605 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.793867 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"] Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.858864 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.884035 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.937361 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.937431 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.937487 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 
05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.039429 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.039526 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.039620 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.040690 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.040735 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.057348 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.083059 5012 generic.go:334] "Generic (PLEG): container finished" podID="04466d10-2177-4361-bd86-333c046b9e52" containerID="819d4936f72ac1e78543d7aa12ff4e73a784d1c30ba80b58ffdf950f2bf4e356" exitCode=0 Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.083146 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"04466d10-2177-4361-bd86-333c046b9e52","Type":"ContainerDied","Data":"819d4936f72ac1e78543d7aa12ff4e73a784d1c30ba80b58ffdf950f2bf4e356"} Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.083667 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.133195 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.146750 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.146998 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.325934 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.340864 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.351110 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.356229 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.365753 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.426550 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mz9j9"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.427895 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.431610 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.439680 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mz9j9"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.453419 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.453715 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.453900 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.453950 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.497640 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"] Feb 19 05:41:47 crc kubenswrapper[5012]: E0219 05:41:47.498185 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-b44m7 ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-695d4f5557-sf54g" podUID="11b0e720-e74b-43f8-b8f3-207b35594187" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.540437 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.543572 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.547331 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556343 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovn-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556409 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/c711491e-0b8b-4737-88c9-bc5e37051ac1-kube-api-access-mr4sn\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556483 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556499 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556529 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-combined-ca-bundle\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556554 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovs-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556572 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556589 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556620 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c711491e-0b8b-4737-88c9-bc5e37051ac1-config\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.557467 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.557902 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.558040 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.575360 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.584021 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.618212 5012 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.619706 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.633480 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.633885 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-plwmh" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.634133 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.634146 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.640873 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660204 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-combined-ca-bundle\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660256 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovs-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660287 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660323 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660358 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c711491e-0b8b-4737-88c9-bc5e37051ac1-config\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660400 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660431 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovn-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25brm\" (UniqueName: 
\"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660492 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660516 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660537 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-scripts\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660558 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/c711491e-0b8b-4737-88c9-bc5e37051ac1-kube-api-access-mr4sn\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660573 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660598 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660617 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660633 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660652 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw57d\" (UniqueName: \"kubernetes.io/projected/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-kube-api-access-bw57d\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660671 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-config\") 
pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.661450 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c711491e-0b8b-4737-88c9-bc5e37051ac1-config\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.661673 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovn-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.662664 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovs-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.676366 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-combined-ca-bundle\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.676777 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 
05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.690824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/c711491e-0b8b-4737-88c9-bc5e37051ac1-kube-api-access-mr4sn\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.735660 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.745609 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761778 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761828 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-scripts\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761854 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761936 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761979 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw57d\" (UniqueName: \"kubernetes.io/projected/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-kube-api-access-bw57d\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762006 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-config\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762079 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762098 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " 
pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762159 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762196 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25brm\" (UniqueName: \"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762236 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.763106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.763704 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-scripts\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.765039 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.766829 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.771729 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.771877 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-config\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.774006 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.774211 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: 
\"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.775972 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.776436 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.778060 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw57d\" (UniqueName: \"kubernetes.io/projected/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-kube-api-access-bw57d\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.782478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25brm\" (UniqueName: \"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.859372 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.969334 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.976929 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.983145 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.988231 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.988451 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-q58gk" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.988578 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.988677 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.004155 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.074787 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.074869 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-lock\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.074905 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c089afc3-1655-4675-b4e1-a62ec6929498-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.074995 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgg5\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-kube-api-access-8zgg5\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.075062 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-cache\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.075101 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.113126 5012 generic.go:334] "Generic (PLEG): container finished" podID="87a37a10-9d54-42b4-b1ec-a841d2836207" containerID="b4e25c113c7481f26d7d1cc0e975114480050ead6685f86ef56b5ca5e4c0cc32" exitCode=0 Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.113358 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" 
event={"ID":"87a37a10-9d54-42b4-b1ec-a841d2836207","Type":"ContainerDied","Data":"b4e25c113c7481f26d7d1cc0e975114480050ead6685f86ef56b5ca5e4c0cc32"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.113387 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" event={"ID":"87a37a10-9d54-42b4-b1ec-a841d2836207","Type":"ContainerStarted","Data":"9cf59006836035d8afc016102789032733760d2ec7f8061587b620acf3488db0"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.130235 5012 generic.go:334] "Generic (PLEG): container finished" podID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerID="a8e754bcf301635d8dc3f5a9e704295059c792c19bbabbb8a572e39943ecb2ef" exitCode=0 Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.130294 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1fd0c672-e258-4feb-8bbd-26135f92f7fb","Type":"ContainerDied","Data":"a8e754bcf301635d8dc3f5a9e704295059c792c19bbabbb8a572e39943ecb2ef"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.145601 5012 generic.go:334] "Generic (PLEG): container finished" podID="1e31edbd-c20b-420d-8888-cafb392410cd" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" exitCode=0 Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.145687 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.158439 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"04466d10-2177-4361-bd86-333c046b9e52","Type":"ContainerStarted","Data":"10b22f8ff53536eaa0e8f250f73cdee88b2784d8c00c54045e1b0d74df53d3e0"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.158624 5012 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.198946 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-cache\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199050 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199106 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199179 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-lock\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199238 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c089afc3-1655-4675-b4e1-a62ec6929498-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199357 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8zgg5\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-kube-api-access-8zgg5\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.200473 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-cache\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.201362 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.206061 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-lock\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.206753 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.206827 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.206922 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. 
No retries permitted until 2026-02-19 05:41:48.706906888 +0000 UTC m=+1004.740229457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.214976 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.816273362 podStartE2EDuration="46.21495414s" podCreationTimestamp="2026-02-19 05:41:02 +0000 UTC" firstStartedPulling="2026-02-19 05:41:04.963508136 +0000 UTC m=+960.996830695" lastFinishedPulling="2026-02-19 05:41:37.362188904 +0000 UTC m=+993.395511473" observedRunningTime="2026-02-19 05:41:48.203611205 +0000 UTC m=+1004.236933794" watchObservedRunningTime="2026-02-19 05:41:48.21495414 +0000 UTC m=+1004.248276709" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.227884 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c089afc3-1655-4675-b4e1-a62ec6929498-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.237971 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mz9j9"] Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.244445 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgg5\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-kube-api-access-8zgg5\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.251200 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.299193 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.357678 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"] Feb 19 05:41:48 crc kubenswrapper[5012]: W0219 05:41:48.387929 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf22ec0c5_41a9_4f36_adb0_405e5a26d209.slice/crio-6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf WatchSource:0}: Error finding container 6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf: Status 404 returned error can't find the container with id 6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.402812 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config" (OuterVolumeSpecName: "config") pod "11b0e720-e74b-43f8-b8f3-207b35594187" (UID: "11b0e720-e74b-43f8-b8f3-207b35594187"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.402875 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") pod \"11b0e720-e74b-43f8-b8f3-207b35594187\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.403049 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") pod \"11b0e720-e74b-43f8-b8f3-207b35594187\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.403167 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") pod \"11b0e720-e74b-43f8-b8f3-207b35594187\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.403745 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11b0e720-e74b-43f8-b8f3-207b35594187" (UID: "11b0e720-e74b-43f8-b8f3-207b35594187"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.404065 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11b0e720-e74b-43f8-b8f3-207b35594187" (UID: "11b0e720-e74b-43f8-b8f3-207b35594187"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.404461 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") pod \"11b0e720-e74b-43f8-b8f3-207b35594187\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.408933 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7" (OuterVolumeSpecName: "kube-api-access-b44m7") pod "11b0e720-e74b-43f8-b8f3-207b35594187" (UID: "11b0e720-e74b-43f8-b8f3-207b35594187"). InnerVolumeSpecName "kube-api-access-b44m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.409562 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.409613 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.409629 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.468443 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.511827 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.521482 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 05:41:48 crc kubenswrapper[5012]: W0219 05:41:48.533410 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3e8f67d_0748_4bff_b7c5_8432c7e4ab64.slice/crio-32eb408fc4e94735fe1da4222ae269096a83a1f02f502a8aa984b5e84249b30f WatchSource:0}: Error finding container 32eb408fc4e94735fe1da4222ae269096a83a1f02f502a8aa984b5e84249b30f: Status 404 returned error can't find the container with id 32eb408fc4e94735fe1da4222ae269096a83a1f02f502a8aa984b5e84249b30f Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.613485 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") pod \"87a37a10-9d54-42b4-b1ec-a841d2836207\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.613759 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") pod \"87a37a10-9d54-42b4-b1ec-a841d2836207\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.613890 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") pod 
\"87a37a10-9d54-42b4-b1ec-a841d2836207\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.620440 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8" (OuterVolumeSpecName: "kube-api-access-nt9v8") pod "87a37a10-9d54-42b4-b1ec-a841d2836207" (UID: "87a37a10-9d54-42b4-b1ec-a841d2836207"). InnerVolumeSpecName "kube-api-access-nt9v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.632904 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config" (OuterVolumeSpecName: "config") pod "87a37a10-9d54-42b4-b1ec-a841d2836207" (UID: "87a37a10-9d54-42b4-b1ec-a841d2836207"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.633548 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87a37a10-9d54-42b4-b1ec-a841d2836207" (UID: "87a37a10-9d54-42b4-b1ec-a841d2836207"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.715981 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.716080 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.716097 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.716111 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.716280 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.716346 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.716443 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:41:49.716412221 +0000 UTC m=+1005.749734780 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.165731 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mz9j9" event={"ID":"c711491e-0b8b-4737-88c9-bc5e37051ac1","Type":"ContainerStarted","Data":"aab06d4f2c3375336ad944f107bcde4a55eead8b6008771d38c6fab07f604ea7"} Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.166118 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mz9j9" event={"ID":"c711491e-0b8b-4737-88c9-bc5e37051ac1","Type":"ContainerStarted","Data":"dd9b7de8fbd16d70fd18f28a60fbf2c541534b56ead82da3a614566b1be7ec6e"} Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.168011 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" event={"ID":"87a37a10-9d54-42b4-b1ec-a841d2836207","Type":"ContainerDied","Data":"9cf59006836035d8afc016102789032733760d2ec7f8061587b620acf3488db0"} Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.168063 5012 scope.go:117] "RemoveContainer" containerID="b4e25c113c7481f26d7d1cc0e975114480050ead6685f86ef56b5ca5e4c0cc32" Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.168119 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.172047 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1fd0c672-e258-4feb-8bbd-26135f92f7fb","Type":"ContainerStarted","Data":"b8e7ff8da781605df4b50e10d2845af44ab26f79df0749f6633a5df64e1cdeaa"} Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.173646 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64","Type":"ContainerStarted","Data":"32eb408fc4e94735fe1da4222ae269096a83a1f02f502a8aa984b5e84249b30f"} Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.175202 5012 generic.go:334] "Generic (PLEG): container finished" podID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerID="d961f9b5d55a9bfaff596c3b756f78502ea40069f8fb1a18443be8e579f64c1b" exitCode=0 Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.175268 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.175250 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerDied","Data":"d961f9b5d55a9bfaff596c3b756f78502ea40069f8fb1a18443be8e579f64c1b"} Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.175380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerStarted","Data":"6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf"} Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.236943 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=14.271588441 podStartE2EDuration="48.236910101s" podCreationTimestamp="2026-02-19 05:41:01 +0000 UTC" firstStartedPulling="2026-02-19 05:41:03.343120283 +0000 UTC m=+959.376442852" lastFinishedPulling="2026-02-19 05:41:37.308441953 +0000 UTC m=+993.341764512" observedRunningTime="2026-02-19 05:41:49.235736642 +0000 UTC m=+1005.269059211" watchObservedRunningTime="2026-02-19 05:41:49.236910101 +0000 UTC m=+1005.270232670" Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.241732 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mz9j9" podStartSLOduration=2.241725592 podStartE2EDuration="2.241725592s" podCreationTimestamp="2026-02-19 05:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:41:49.189209973 +0000 UTC m=+1005.222532542" watchObservedRunningTime="2026-02-19 05:41:49.241725592 +0000 UTC m=+1005.275048161" Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.312365 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"] Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.337650 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"] Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.343858 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"] Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.349202 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"] Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.735185 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:49 crc kubenswrapper[5012]: E0219 05:41:49.736629 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:41:49 crc kubenswrapper[5012]: E0219 05:41:49.736644 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:41:49 crc kubenswrapper[5012]: E0219 05:41:49.736680 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:41:51.73666657 +0000 UTC m=+1007.769989139 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.209819 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64","Type":"ContainerStarted","Data":"eda44d9bc80983dd7021f281b18a7b62b552db2c1bc972d5c8c5f911d7a7d392"} Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.209871 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64","Type":"ContainerStarted","Data":"c0db4f523c9f822a4d993669cc5337e8476abe653273676901f2ae54825cbf26"} Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.210396 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.227295 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerStarted","Data":"7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e"} Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.228125 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.239906 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.339218161 podStartE2EDuration="3.239887005s" podCreationTimestamp="2026-02-19 05:41:47 +0000 UTC" firstStartedPulling="2026-02-19 05:41:48.535813193 +0000 UTC m=+1004.569135762" lastFinishedPulling="2026-02-19 05:41:49.436482037 +0000 UTC m=+1005.469804606" observedRunningTime="2026-02-19 05:41:50.239567437 
+0000 UTC m=+1006.272889996" watchObservedRunningTime="2026-02-19 05:41:50.239887005 +0000 UTC m=+1006.273209574" Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.715281 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b0e720-e74b-43f8-b8f3-207b35594187" path="/var/lib/kubelet/pods/11b0e720-e74b-43f8-b8f3-207b35594187/volumes" Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.716451 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a37a10-9d54-42b4-b1ec-a841d2836207" path="/var/lib/kubelet/pods/87a37a10-9d54-42b4-b1ec-a841d2836207/volumes" Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.738413 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podStartSLOduration=3.7383896720000003 podStartE2EDuration="3.738389672s" podCreationTimestamp="2026-02-19 05:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:41:50.271877309 +0000 UTC m=+1006.305199878" watchObservedRunningTime="2026-02-19 05:41:50.738389672 +0000 UTC m=+1006.771712251" Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.797269 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:51 crc kubenswrapper[5012]: E0219 05:41:51.797469 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:41:51 crc kubenswrapper[5012]: E0219 05:41:51.797728 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:41:51 crc kubenswrapper[5012]: E0219 05:41:51.797799 5012 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:41:55.797774455 +0000 UTC m=+1011.831097114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.887751 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5vxhd"] Feb 19 05:41:51 crc kubenswrapper[5012]: E0219 05:41:51.888073 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a37a10-9d54-42b4-b1ec-a841d2836207" containerName="init" Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.888087 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a37a10-9d54-42b4-b1ec-a841d2836207" containerName="init" Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.888285 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a37a10-9d54-42b4-b1ec-a841d2836207" containerName="init" Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.888891 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.891106 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.891140 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.891115 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.928413 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5vxhd"] Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.001992 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002224 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 
05:41:52.002328 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002354 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002370 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002455 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104366 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104415 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104436 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104474 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104566 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104585 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104652 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.105118 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.105711 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.105820 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.110789 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.113824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") pod 
\"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.114284 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.133221 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.208537 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.253282 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerStarted","Data":"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e"} Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.660071 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.660524 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.691508 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5vxhd"] Feb 19 05:41:53 crc kubenswrapper[5012]: I0219 05:41:53.331369 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/openstack-galera-0" Feb 19 05:41:53 crc kubenswrapper[5012]: I0219 05:41:53.468566 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.399988 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.400043 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.525324 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.910636 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-r8ddf"] Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.911674 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r8ddf" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.923589 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"] Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.924922 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.926407 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.934359 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r8ddf"] Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.950258 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.950352 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.950396 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.950434 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") pod \"glance-db-create-r8ddf\" 
(UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf" Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.953568 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.051738 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.051836 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.051921 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.051985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.052690 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.053830 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.068581 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.075950 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.230107 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r8ddf" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.241362 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.312027 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5vxhd" event={"ID":"d05da3bc-6c22-4956-9fab-331eed79d175","Type":"ContainerStarted","Data":"474f2d807b97d130f773ae47927296219b201d325e7ae32ec13971a56bf04456"} Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.514433 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.572344 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jktc7"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.574108 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.578280 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jktc7"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.663942 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.663987 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28csb\" (UniqueName: \"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.665569 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-b5f0-account-create-update-l7b8m"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.677731 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b5f0-account-create-update-l7b8m"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.677823 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.684241 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.758316 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hthfx"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.759883 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.767165 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.767202 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28csb\" (UniqueName: \"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.775583 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") pod 
\"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.783161 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hthfx"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.797760 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28csb\" (UniqueName: \"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.860353 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.864940 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.869412 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.869763 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.870670 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqkd8\" 
(UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.870700 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.870727 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: E0219 05:41:55.870862 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:41:55 crc kubenswrapper[5012]: E0219 05:41:55.870875 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:41:55 crc kubenswrapper[5012]: E0219 05:41:55.870911 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:42:03.870898151 +0000 UTC m=+1019.904220720 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.875378 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.882595 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.907646 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.934856 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r8ddf"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973244 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973330 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqkd8\" (UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973361 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973407 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973473 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.974356 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.974641 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:55.999992 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.003839 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqkd8\" (UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.019954 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.075598 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.076139 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") pod 
\"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.076783 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.106207 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hthfx" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.130205 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.184384 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.296473 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.341662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.343312 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r8ddf" event={"ID":"e1e3020d-901d-4649-9e94-c5c0a4cc523d","Type":"ContainerStarted","Data":"41cb74d66ab3e64634057788877cc78c4b6583899219dd015cbf188304216e08"} Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.345019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91bd-account-create-update-54r7l" event={"ID":"90a75d3b-186a-41d6-92a8-94729c520aa5","Type":"ContainerStarted","Data":"2cc8742fd7eb09f99450e71f46c7d9913eee7444573c215e9049c6c4deb3c4af"} Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.452272 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jktc7"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.561721 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.578391 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hthfx"] Feb 19 05:41:56 crc kubenswrapper[5012]: W0219 05:41:56.589686 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1e7d95a_d78a_4d54_a66b_565114b4823e.slice/crio-415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e WatchSource:0}: Error finding container 415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e: Status 404 returned error can't find the container 
with id 415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e Feb 19 05:41:56 crc kubenswrapper[5012]: W0219 05:41:56.595448 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5d1fc5_7a37_4ed2_86d6_7e0689c7b65a.slice/crio-32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b WatchSource:0}: Error finding container 32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b: Status 404 returned error can't find the container with id 32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.751990 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-vjzm9"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.752942 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-vjzm9"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.753022 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.826525 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.829142 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.835103 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.845016 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.877375 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b5f0-account-create-update-l7b8m"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.891253 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.891345 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.992704 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.992786 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.992832 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.992881 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.993607 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.094177 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.094330 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.095609 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.126924 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.134028 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.357196 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" containerID="573a87d5e8e95277642af154eba731e6d506fbe9be8db1436f41349ffe7bcbd4" exitCode=0 Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.357269 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hthfx" 
event={"ID":"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a","Type":"ContainerDied","Data":"573a87d5e8e95277642af154eba731e6d506fbe9be8db1436f41349ffe7bcbd4"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.357330 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hthfx" event={"ID":"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a","Type":"ContainerStarted","Data":"32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.359537 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jktc7" event={"ID":"12f3008a-413a-4fe7-b3c1-773c10b6b2bf","Type":"ContainerStarted","Data":"7eb12edfddf61f27034bd898f26189ecda10cab4ae6f1560fd50a310988165c4"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.362259 5012 generic.go:334] "Generic (PLEG): container finished" podID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" containerID="65e190912c6d7142d01553a587f58e32095a3f893daa4d06beb98e431777939c" exitCode=0 Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.362336 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r8ddf" event={"ID":"e1e3020d-901d-4649-9e94-c5c0a4cc523d","Type":"ContainerDied","Data":"65e190912c6d7142d01553a587f58e32095a3f893daa4d06beb98e431777939c"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.364577 5012 generic.go:334] "Generic (PLEG): container finished" podID="90a75d3b-186a-41d6-92a8-94729c520aa5" containerID="0ba4832ef5cde65c22a33ecfff620cd13c71e947e2063a45381a8045e3407918" exitCode=0 Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.364633 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91bd-account-create-update-54r7l" event={"ID":"90a75d3b-186a-41d6-92a8-94729c520aa5","Type":"ContainerDied","Data":"0ba4832ef5cde65c22a33ecfff620cd13c71e947e2063a45381a8045e3407918"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.366104 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-22e2-account-create-update-vddht" event={"ID":"d1e7d95a-d78a-4d54-a66b-565114b4823e","Type":"ContainerStarted","Data":"415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.367868 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5f0-account-create-update-l7b8m" event={"ID":"533d4699-332c-4ceb-ad6e-77c680699214","Type":"ContainerStarted","Data":"6000ce41874befebf1b7c7cc7cbf4ce7340ce07971239d672500e2598326f86a"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.368855 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.369724 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"3c628866-f96d-4e7b-8846-7073c98dd389","Type":"ContainerStarted","Data":"39447df96b54f1be84a97ec4a361863f1bba8e92bceec140937b025ac768a708"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.386658 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.447602 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-jktc7" podStartSLOduration=2.447580962 podStartE2EDuration="2.447580962s" podCreationTimestamp="2026-02-19 05:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:41:57.443079138 +0000 UTC m=+1013.476401717" watchObservedRunningTime="2026-02-19 05:41:57.447580962 +0000 UTC m=+1013.480903531" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.473288 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-22e2-account-create-update-vddht" podStartSLOduration=2.473265327 podStartE2EDuration="2.473265327s" podCreationTimestamp="2026-02-19 05:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:41:57.460218879 +0000 UTC m=+1013.493541488" watchObservedRunningTime="2026-02-19 05:41:57.473265327 +0000 UTC m=+1013.506587906" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.860507 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.925078 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.925328 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" containerID="cri-o://8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423" gracePeriod=10 Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.379873 
5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.381805 5012 generic.go:334] "Generic (PLEG): container finished" podID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerID="8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423" exitCode=0 Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.381870 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerDied","Data":"8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423"} Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.383266 5012 generic.go:334] "Generic (PLEG): container finished" podID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" containerID="c98bff27bc9812d723f9217b691c091425289e0f299460c4c4e1c7163b359d43" exitCode=0 Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.383328 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jktc7" event={"ID":"12f3008a-413a-4fe7-b3c1-773c10b6b2bf","Type":"ContainerDied","Data":"c98bff27bc9812d723f9217b691c091425289e0f299460c4c4e1c7163b359d43"} Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.385603 5012 generic.go:334] "Generic (PLEG): container finished" podID="d1e7d95a-d78a-4d54-a66b-565114b4823e" containerID="e1dc1ea6e87e48e7096bcfb12892dc9ac8929ba2984948549033f17095a5c4d5" exitCode=0 Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.385773 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22e2-account-create-update-vddht" event={"ID":"d1e7d95a-d78a-4d54-a66b-565114b4823e","Type":"ContainerDied","Data":"e1dc1ea6e87e48e7096bcfb12892dc9ac8929ba2984948549033f17095a5c4d5"} Feb 19 05:42:01 crc 
kubenswrapper[5012]: I0219 05:42:01.311964 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.315977 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.319955 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.323149 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.396331 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.396456 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.501211 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 
05:42:01.501400 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.502422 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.532685 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.641224 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.440796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91bd-account-create-update-54r7l" event={"ID":"90a75d3b-186a-41d6-92a8-94729c520aa5","Type":"ContainerDied","Data":"2cc8742fd7eb09f99450e71f46c7d9913eee7444573c215e9049c6c4deb3c4af"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.441078 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cc8742fd7eb09f99450e71f46c7d9913eee7444573c215e9049c6c4deb3c4af" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.452660 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22e2-account-create-update-vddht" event={"ID":"d1e7d95a-d78a-4d54-a66b-565114b4823e","Type":"ContainerDied","Data":"415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.452687 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.461883 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hthfx" event={"ID":"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a","Type":"ContainerDied","Data":"32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.461927 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.507151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerDied","Data":"4cb41e822d4dbb1861f13461a8bcb5e410e5b409d268141b4a6e8e97a369da40"} 
Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.507187 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb41e822d4dbb1861f13461a8bcb5e410e5b409d268141b4a6e8e97a369da40" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.523473 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jktc7" event={"ID":"12f3008a-413a-4fe7-b3c1-773c10b6b2bf","Type":"ContainerDied","Data":"7eb12edfddf61f27034bd898f26189ecda10cab4ae6f1560fd50a310988165c4"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.523771 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb12edfddf61f27034bd898f26189ecda10cab4ae6f1560fd50a310988165c4" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.538512 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hthfx" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.539771 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r8ddf" event={"ID":"e1e3020d-901d-4649-9e94-c5c0a4cc523d","Type":"ContainerDied","Data":"41cb74d66ab3e64634057788877cc78c4b6583899219dd015cbf188304216e08"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.539805 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41cb74d66ab3e64634057788877cc78c4b6583899219dd015cbf188304216e08" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.540595 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641461 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") pod \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641783 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") pod \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641846 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") pod \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641871 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") pod \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641899 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") pod \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.642637 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" (UID: "6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.649414 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n" (OuterVolumeSpecName: "kube-api-access-mcg6n") pod "79e01828-7818-4fe8-bd3f-8d39e9bf939c" (UID: "79e01828-7818-4fe8-bd3f-8d39e9bf939c"). InnerVolumeSpecName "kube-api-access-mcg6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.649468 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm" (OuterVolumeSpecName: "kube-api-access-nt2lm") pod "6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" (UID: "6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a"). InnerVolumeSpecName "kube-api-access-nt2lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.689476 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config" (OuterVolumeSpecName: "config") pod "79e01828-7818-4fe8-bd3f-8d39e9bf939c" (UID: "79e01828-7818-4fe8-bd3f-8d39e9bf939c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.690090 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79e01828-7818-4fe8-bd3f-8d39e9bf939c" (UID: "79e01828-7818-4fe8-bd3f-8d39e9bf939c"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743720 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743749 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743788 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743800 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743811 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.787376 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.813051 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r8ddf" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.828414 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jktc7" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.858389 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947150 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") pod \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947258 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28csb\" (UniqueName: \"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") pod \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947336 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") pod \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947362 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") pod \"90a75d3b-186a-41d6-92a8-94729c520aa5\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947425 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") pod 
\"90a75d3b-186a-41d6-92a8-94729c520aa5\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947534 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") pod \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.948640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1e3020d-901d-4649-9e94-c5c0a4cc523d" (UID: "e1e3020d-901d-4649-9e94-c5c0a4cc523d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.948698 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90a75d3b-186a-41d6-92a8-94729c520aa5" (UID: "90a75d3b-186a-41d6-92a8-94729c520aa5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.948712 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12f3008a-413a-4fe7-b3c1-773c10b6b2bf" (UID: "12f3008a-413a-4fe7-b3c1-773c10b6b2bf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.952613 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk" (OuterVolumeSpecName: "kube-api-access-4mplk") pod "90a75d3b-186a-41d6-92a8-94729c520aa5" (UID: "90a75d3b-186a-41d6-92a8-94729c520aa5"). InnerVolumeSpecName "kube-api-access-4mplk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.952728 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb" (OuterVolumeSpecName: "kube-api-access-28csb") pod "12f3008a-413a-4fe7-b3c1-773c10b6b2bf" (UID: "12f3008a-413a-4fe7-b3c1-773c10b6b2bf"). InnerVolumeSpecName "kube-api-access-28csb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.953486 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj" (OuterVolumeSpecName: "kube-api-access-sttkj") pod "e1e3020d-901d-4649-9e94-c5c0a4cc523d" (UID: "e1e3020d-901d-4649-9e94-c5c0a4cc523d"). InnerVolumeSpecName "kube-api-access-sttkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.022174 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"] Feb 19 05:42:03 crc kubenswrapper[5012]: W0219 05:42:03.028070 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1c217f_b6fa_482c_ad1b_5168cb882283.slice/crio-975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b WatchSource:0}: Error finding container 975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b: Status 404 returned error can't find the container with id 975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.031576 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.048731 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") pod \"d1e7d95a-d78a-4d54-a66b-565114b4823e\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.048813 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") pod \"d1e7d95a-d78a-4d54-a66b-565114b4823e\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049454 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1e7d95a-d78a-4d54-a66b-565114b4823e" (UID: 
"d1e7d95a-d78a-4d54-a66b-565114b4823e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049774 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049793 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049819 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049831 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049839 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049849 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049857 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28csb\" (UniqueName: 
\"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.051472 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b" (OuterVolumeSpecName: "kube-api-access-5kq4b") pod "d1e7d95a-d78a-4d54-a66b-565114b4823e" (UID: "d1e7d95a-d78a-4d54-a66b-565114b4823e"). InnerVolumeSpecName "kube-api-access-5kq4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.116426 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-vjzm9"] Feb 19 05:42:03 crc kubenswrapper[5012]: W0219 05:42:03.124358 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda973520b_997d_4c23_a056_590c96123e43.slice/crio-314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70 WatchSource:0}: Error finding container 314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70: Status 404 returned error can't find the container with id 314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.152093 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.547854 5012 generic.go:334] "Generic (PLEG): container finished" podID="ff1c217f-b6fa-482c-ad1b-5168cb882283" containerID="abc0139cac003d44d29c14053f3981b5bda18d4f49ee4f01ff970a93700f4fc7" exitCode=0 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.547917 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9g9k" 
event={"ID":"ff1c217f-b6fa-482c-ad1b-5168cb882283","Type":"ContainerDied","Data":"abc0139cac003d44d29c14053f3981b5bda18d4f49ee4f01ff970a93700f4fc7"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.547943 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9g9k" event={"ID":"ff1c217f-b6fa-482c-ad1b-5168cb882283","Type":"ContainerStarted","Data":"975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.551335 5012 generic.go:334] "Generic (PLEG): container finished" podID="6e45e098-f689-4015-9871-5f66e5d7bef1" containerID="0da1732600a370cfbfe77664995408f2ab300c5ef7fcf22ab0fd4f379bf54473" exitCode=0 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.551376 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-34a7-account-create-update-84f2g" event={"ID":"6e45e098-f689-4015-9871-5f66e5d7bef1","Type":"ContainerDied","Data":"0da1732600a370cfbfe77664995408f2ab300c5ef7fcf22ab0fd4f379bf54473"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.551401 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-34a7-account-create-update-84f2g" event={"ID":"6e45e098-f689-4015-9871-5f66e5d7bef1","Type":"ContainerStarted","Data":"be5862dc6f34b983db201be5afc0571b16d829c3169057121e4b42ea84e0b0c6"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.553180 5012 generic.go:334] "Generic (PLEG): container finished" podID="a973520b-997d-4c23-a056-590c96123e43" containerID="6cb45a4049590e4fb7d60e94e092be98bdb1a162fc286f8a8013620e8c330260" exitCode=0 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.553232 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vjzm9" event={"ID":"a973520b-997d-4c23-a056-590c96123e43","Type":"ContainerDied","Data":"6cb45a4049590e4fb7d60e94e092be98bdb1a162fc286f8a8013620e8c330260"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 
05:42:03.553248 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vjzm9" event={"ID":"a973520b-997d-4c23-a056-590c96123e43","Type":"ContainerStarted","Data":"314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.555458 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5vxhd" event={"ID":"d05da3bc-6c22-4956-9fab-331eed79d175","Type":"ContainerStarted","Data":"c2e46f5b1d6014395f2cc0ca721ce5c1df3a8f677de34c3d64f89b616ca2d967"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558545 5012 generic.go:334] "Generic (PLEG): container finished" podID="533d4699-332c-4ceb-ad6e-77c680699214" containerID="8d4101d8165775d3c785f3ad562d7ef71806f55866410c4f9e87581c5430851f" exitCode=0 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558635 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r8ddf" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558645 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558661 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558679 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5f0-account-create-update-l7b8m" event={"ID":"533d4699-332c-4ceb-ad6e-77c680699214","Type":"ContainerDied","Data":"8d4101d8165775d3c785f3ad562d7ef71806f55866410c4f9e87581c5430851f"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558704 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jktc7" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558727 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558731 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hthfx" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.599592 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5vxhd" podStartSLOduration=5.444478055 podStartE2EDuration="12.59957206s" podCreationTimestamp="2026-02-19 05:41:51 +0000 UTC" firstStartedPulling="2026-02-19 05:41:55.294774343 +0000 UTC m=+1011.328096912" lastFinishedPulling="2026-02-19 05:42:02.449868348 +0000 UTC m=+1018.483190917" observedRunningTime="2026-02-19 05:42:03.593137668 +0000 UTC m=+1019.626460237" watchObservedRunningTime="2026-02-19 05:42:03.59957206 +0000 UTC m=+1019.632894629" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.736033 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.745282 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.871796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:42:03 crc kubenswrapper[5012]: E0219 05:42:03.880246 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:42:03 crc kubenswrapper[5012]: E0219 05:42:03.880289 
5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:42:03 crc kubenswrapper[5012]: E0219 05:42:03.880382 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:42:19.880354536 +0000 UTC m=+1035.913677105 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found Feb 19 05:42:04 crc kubenswrapper[5012]: I0219 05:42:04.719181 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" path="/var/lib/kubelet/pods/79e01828-7818-4fe8-bd3f-8d39e9bf939c/volumes" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.133588 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: i/o timeout" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.224388 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.234065 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vjzm9" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251380 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-24p82"] Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251698 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251734 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="init" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251746 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="init" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251756 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251762 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251778 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251785 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251795 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a75d3b-186a-41d6-92a8-94729c520aa5" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251801 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a75d3b-186a-41d6-92a8-94729c520aa5" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251814 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533d4699-332c-4ceb-ad6e-77c680699214" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251820 5012 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="533d4699-332c-4ceb-ad6e-77c680699214" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251832 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251838 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251845 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e7d95a-d78a-4d54-a66b-565114b4823e" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251851 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e7d95a-d78a-4d54-a66b-565114b4823e" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251862 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a973520b-997d-4c23-a056-590c96123e43" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251870 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a973520b-997d-4c23-a056-590c96123e43" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251880 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251885 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252034 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="533d4699-332c-4ceb-ad6e-77c680699214" containerName="mariadb-account-create-update" Feb 19 
05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252048 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e7d95a-d78a-4d54-a66b-565114b4823e" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252060 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252069 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252082 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252089 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252096 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a973520b-997d-4c23-a056-590c96123e43" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252107 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a75d3b-186a-41d6-92a8-94729c520aa5" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252118 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1c217f-b6fa-482c-ad1b-5168cb882283" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252664 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.255769 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.255811 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pmvmf" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.256926 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.312809 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") pod \"6e45e098-f689-4015-9871-5f66e5d7bef1\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.312951 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") pod \"533d4699-332c-4ceb-ad6e-77c680699214\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313043 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") pod \"a973520b-997d-4c23-a056-590c96123e43\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313086 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqkd8\" (UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") pod 
\"533d4699-332c-4ceb-ad6e-77c680699214\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313102 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") pod \"a973520b-997d-4c23-a056-590c96123e43\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313147 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") pod \"6e45e098-f689-4015-9871-5f66e5d7bef1\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313162 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") pod \"ff1c217f-b6fa-482c-ad1b-5168cb882283\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313206 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") pod \"ff1c217f-b6fa-482c-ad1b-5168cb882283\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313399 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 
05:42:05.313460 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313485 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313513 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313544 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "533d4699-332c-4ceb-ad6e-77c680699214" (UID: "533d4699-332c-4ceb-ad6e-77c680699214"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313577 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a973520b-997d-4c23-a056-590c96123e43" (UID: "a973520b-997d-4c23-a056-590c96123e43"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.314208 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e45e098-f689-4015-9871-5f66e5d7bef1" (UID: "6e45e098-f689-4015-9871-5f66e5d7bef1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.314653 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff1c217f-b6fa-482c-ad1b-5168cb882283" (UID: "ff1c217f-b6fa-482c-ad1b-5168cb882283"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.315639 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.315656 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.315666 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.315676 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.321062 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8" (OuterVolumeSpecName: "kube-api-access-fqkd8") pod "533d4699-332c-4ceb-ad6e-77c680699214" (UID: "533d4699-332c-4ceb-ad6e-77c680699214"). InnerVolumeSpecName "kube-api-access-fqkd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.322566 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-24p82"] Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.323387 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6" (OuterVolumeSpecName: "kube-api-access-blfg6") pod "ff1c217f-b6fa-482c-ad1b-5168cb882283" (UID: "ff1c217f-b6fa-482c-ad1b-5168cb882283"). InnerVolumeSpecName "kube-api-access-blfg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.324603 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv" (OuterVolumeSpecName: "kube-api-access-4l9wv") pod "a973520b-997d-4c23-a056-590c96123e43" (UID: "a973520b-997d-4c23-a056-590c96123e43"). InnerVolumeSpecName "kube-api-access-4l9wv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.333976 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd" (OuterVolumeSpecName: "kube-api-access-ld6zd") pod "6e45e098-f689-4015-9871-5f66e5d7bef1" (UID: "6e45e098-f689-4015-9871-5f66e5d7bef1"). InnerVolumeSpecName "kube-api-access-ld6zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417366 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417439 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417466 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417497 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " 
pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417576 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417587 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqkd8\" (UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417596 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417606 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.432808 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.432863 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.433001 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.440440 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.579838 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.580527 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.580521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5f0-account-create-update-l7b8m" event={"ID":"533d4699-332c-4ceb-ad6e-77c680699214","Type":"ContainerDied","Data":"6000ce41874befebf1b7c7cc7cbf4ce7340ce07971239d672500e2598326f86a"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.580681 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6000ce41874befebf1b7c7cc7cbf4ce7340ce07971239d672500e2598326f86a" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.582224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9g9k" event={"ID":"ff1c217f-b6fa-482c-ad1b-5168cb882283","Type":"ContainerDied","Data":"975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.582252 5012 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.582254 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.585701 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-34a7-account-create-update-84f2g" event={"ID":"6e45e098-f689-4015-9871-5f66e5d7bef1","Type":"ContainerDied","Data":"be5862dc6f34b983db201be5afc0571b16d829c3169057121e4b42ea84e0b0c6"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.585724 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be5862dc6f34b983db201be5afc0571b16d829c3169057121e4b42ea84e0b0c6" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.585712 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.587139 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vjzm9" event={"ID":"a973520b-997d-4c23-a056-590c96123e43","Type":"ContainerDied","Data":"314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.587162 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.587168 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-vjzm9" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.589971 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.616503 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.027395032 podStartE2EDuration="59.616486123s" podCreationTimestamp="2026-02-19 05:41:06 +0000 UTC" firstStartedPulling="2026-02-19 05:41:08.667585171 +0000 UTC m=+964.700907740" lastFinishedPulling="2026-02-19 05:42:05.256676262 +0000 UTC m=+1021.289998831" observedRunningTime="2026-02-19 05:42:05.61317292 +0000 UTC m=+1021.646495489" watchObservedRunningTime="2026-02-19 05:42:05.616486123 +0000 UTC m=+1021.649808692" Feb 19 05:42:06 crc kubenswrapper[5012]: I0219 05:42:06.025950 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-24p82"] Feb 19 05:42:06 crc kubenswrapper[5012]: W0219 05:42:06.026683 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31d56d90_ce06_4de3_9edb_2092780e9afe.slice/crio-aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f WatchSource:0}: Error finding container aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f: Status 404 returned error can't find the container with id aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f Feb 19 05:42:06 crc kubenswrapper[5012]: I0219 05:42:06.600613 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-24p82" 
event={"ID":"31d56d90-ce06-4de3-9edb-2092780e9afe","Type":"ContainerStarted","Data":"aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f"} Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.755432 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.761999 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.977862 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.979003 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.982079 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:08 crc kubenswrapper[5012]: I0219 05:42:08.053639 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 05:42:08 crc kubenswrapper[5012]: I0219 05:42:08.617273 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:08 crc kubenswrapper[5012]: I0219 05:42:08.724462 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1c217f-b6fa-482c-ad1b-5168cb882283" path="/var/lib/kubelet/pods/ff1c217f-b6fa-482c-ad1b-5168cb882283/volumes" Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.652050 5012 generic.go:334] "Generic (PLEG): container finished" podID="b0095712-262e-4562-afac-0f2f4372224d" containerID="1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f" exitCode=0 Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.652126 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerDied","Data":"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"} Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.654569 5012 generic.go:334] "Generic (PLEG): container finished" podID="d05da3bc-6c22-4956-9fab-331eed79d175" containerID="c2e46f5b1d6014395f2cc0ca721ce5c1df3a8f677de34c3d64f89b616ca2d967" exitCode=0 Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.654597 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5vxhd" event={"ID":"d05da3bc-6c22-4956-9fab-331eed79d175","Type":"ContainerDied","Data":"c2e46f5b1d6014395f2cc0ca721ce5c1df3a8f677de34c3d64f89b616ca2d967"} Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.861884 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.862366 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="prometheus" containerID="cri-o://a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" gracePeriod=600 Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.862572 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="thanos-sidecar" containerID="cri-o://e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" gracePeriod=600 Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.862678 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="config-reloader" containerID="cri-o://7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" gracePeriod=600 Feb 
19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.330224 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.478293 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.478829 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.478880 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479059 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479113 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 
05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479139 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479157 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479195 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479218 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479237 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479542 5012 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.481048 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.481130 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.483499 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out" (OuterVolumeSpecName: "config-out") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.483648 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.486560 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694" (OuterVolumeSpecName: "kube-api-access-7s694") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "kube-api-access-7s694". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.489737 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config" (OuterVolumeSpecName: "config") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.490029 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.503230 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "pvc-7fbf442c-c467-48a5-9a2f-86a74d778584". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.516559 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config" (OuterVolumeSpecName: "web-config") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585040 5012 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585081 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585120 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") on node \"crc\" " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585132 5012 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585142 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585152 5012 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585162 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585171 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585179 5012 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585188 5012 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.605284 5012 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping UnmountDevice... Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.605473 5012 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7fbf442c-c467-48a5-9a2f-86a74d778584" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584") on node "crc" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665282 5012 generic.go:334] "Generic (PLEG): container finished" podID="1e31edbd-c20b-420d-8888-cafb392410cd" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" exitCode=0 Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665336 5012 generic.go:334] "Generic (PLEG): container finished" podID="1e31edbd-c20b-420d-8888-cafb392410cd" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" exitCode=0 Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665343 5012 generic.go:334] "Generic (PLEG): container finished" podID="1e31edbd-c20b-420d-8888-cafb392410cd" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" exitCode=0 Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665344 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665395 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665410 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665419 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665435 5012 scope.go:117] "RemoveContainer" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665501 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.674193 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerStarted","Data":"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.675387 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.686953 5012 reconciler_common.go:293] "Volume detached for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.695855 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.744014692 podStartE2EDuration="1m13.695839525s" podCreationTimestamp="2026-02-19 05:40:59 +0000 UTC" firstStartedPulling="2026-02-19 05:41:01.452812258 +0000 UTC m=+957.486134827" 
lastFinishedPulling="2026-02-19 05:41:37.404637091 +0000 UTC m=+993.437959660" observedRunningTime="2026-02-19 05:42:12.693382333 +0000 UTC m=+1028.726704902" watchObservedRunningTime="2026-02-19 05:42:12.695839525 +0000 UTC m=+1028.729162094" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.747699 5012 scope.go:117] "RemoveContainer" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.764686 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.778204 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.780455 5012 scope.go:117] "RemoveContainer" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.804707 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805664 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="thanos-sidecar" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805685 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="thanos-sidecar" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805710 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1c217f-b6fa-482c-ad1b-5168cb882283" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805717 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1c217f-b6fa-482c-ad1b-5168cb882283" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805728 5012 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6e45e098-f689-4015-9871-5f66e5d7bef1" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805734 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e45e098-f689-4015-9871-5f66e5d7bef1" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805744 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="init-config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805751 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="init-config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805762 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="prometheus" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805769 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="prometheus" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805783 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805789 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805932 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.806063 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="thanos-sidecar" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.806078 
5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e45e098-f689-4015-9871-5f66e5d7bef1" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.806086 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="prometheus" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.807580 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.810661 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.810838 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.810957 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.811154 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.811257 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.811397 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.811808 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7bqtw" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.812339 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.818249 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.819581 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.832141 5012 scope.go:117] "RemoveContainer" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.855778 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lj2kq"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.864404 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.869181 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.888345 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lj2kq"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895125 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895177 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895199 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895229 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895251 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895317 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895357 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895401 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895426 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895447 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895466 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895489 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.897035 5012 scope.go:117] "RemoveContainer" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.897346 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": container with ID starting with e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f not found: ID does not exist" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.897368 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} err="failed to get container status 
\"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": rpc error: code = NotFound desc = could not find container \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": container with ID starting with e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.897387 5012 scope.go:117] "RemoveContainer" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.913567 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": container with ID starting with 7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0 not found: ID does not exist" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.913610 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} err="failed to get container status \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": rpc error: code = NotFound desc = could not find container \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": container with ID starting with 7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.913635 5012 scope.go:117] "RemoveContainer" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.914514 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": container with ID starting with a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c not found: ID does not exist" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.914554 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} err="failed to get container status \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": rpc error: code = NotFound desc = could not find container \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": container with ID starting with a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.914580 5012 scope.go:117] "RemoveContainer" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.914852 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": container with ID starting with 2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6 not found: ID does not exist" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.914878 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} err="failed to get container status \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": rpc error: code = NotFound desc = could not find container \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": container with ID 
starting with 2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.914893 5012 scope.go:117] "RemoveContainer" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915068 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} err="failed to get container status \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": rpc error: code = NotFound desc = could not find container \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": container with ID starting with e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915088 5012 scope.go:117] "RemoveContainer" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915605 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} err="failed to get container status \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": rpc error: code = NotFound desc = could not find container \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": container with ID starting with 7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915624 5012 scope.go:117] "RemoveContainer" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915792 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} err="failed to get container status \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": rpc error: code = NotFound desc = could not find container \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": container with ID starting with a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915812 5012 scope.go:117] "RemoveContainer" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915988 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} err="failed to get container status \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": rpc error: code = NotFound desc = could not find container \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": container with ID starting with 2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916008 5012 scope.go:117] "RemoveContainer" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916183 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} err="failed to get container status \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": rpc error: code = NotFound desc = could not find container \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": container with ID starting with e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f not found: ID does not 
exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916203 5012 scope.go:117] "RemoveContainer" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916408 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} err="failed to get container status \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": rpc error: code = NotFound desc = could not find container \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": container with ID starting with 7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916428 5012 scope.go:117] "RemoveContainer" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916587 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} err="failed to get container status \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": rpc error: code = NotFound desc = could not find container \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": container with ID starting with a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916605 5012 scope.go:117] "RemoveContainer" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916776 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} err="failed to get container status 
\"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": rpc error: code = NotFound desc = could not find container \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": container with ID starting with 2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997776 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997851 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997884 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997923 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997953 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997973 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998014 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998085 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998113 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998134 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998175 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998256 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998283 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998319 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.999004 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.999062 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.999467 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.005145 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.005199 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.005737 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.006438 5012 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.006465 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/80266977aa18e8991458f1f7d5520b709fb21586520e915bbacb4bc2380e455f/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.008724 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.009153 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.010199 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.010491 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.010955 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.014005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.033776 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.055483 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.099933 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.099971 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100012 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100090 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100127 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100266 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szrf5\" 
(UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100313 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100637 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100695 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.101657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.102038 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices" (OuterVolumeSpecName: 
"ring-data-devices") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.106487 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.114857 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5" (OuterVolumeSpecName: "kube-api-access-szrf5") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "kube-api-access-szrf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.120618 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.120881 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.137484 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.141124 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.148672 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.149348 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts" (OuterVolumeSpecName: "scripts") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203084 5012 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203112 5012 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203121 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203129 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203139 5012 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203146 5012 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203154 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.226183 5012 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.613029 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:13 crc kubenswrapper[5012]: W0219 05:42:13.623717 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8509cc68_c35e_47ea_a634_896143d747ed.slice/crio-c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5 WatchSource:0}: Error finding container c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5: Status 404 returned error can't find the container with id c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5 Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.694802 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5vxhd" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.699335 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5vxhd" event={"ID":"d05da3bc-6c22-4956-9fab-331eed79d175","Type":"ContainerDied","Data":"474f2d807b97d130f773ae47927296219b201d325e7ae32ec13971a56bf04456"} Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.699383 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474f2d807b97d130f773ae47927296219b201d325e7ae32ec13971a56bf04456" Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.717021 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5"} Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.729643 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/root-account-create-update-lj2kq"] Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.716359 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" path="/var/lib/kubelet/pods/1e31edbd-c20b-420d-8888-cafb392410cd/volumes" Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.730218 5012 generic.go:334] "Generic (PLEG): container finished" podID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" containerID="93e7f5c5600e832347781d221af700104ca8f39c9c057fb3a233ce4702cf409c" exitCode=0 Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.730267 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lj2kq" event={"ID":"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79","Type":"ContainerDied","Data":"93e7f5c5600e832347781d221af700104ca8f39c9c057fb3a233ce4702cf409c"} Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.730350 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lj2kq" event={"ID":"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79","Type":"ContainerStarted","Data":"768c114b539b2feebb6baf342756cce337f48a86e7b046dcbc36cac8568a33b9"} Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.999041 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cr94m" podUID="e2c9ac17-43ef-4ccb-83b1-e20ee03289de" containerName="ovn-controller" probeResult="failure" output=< Feb 19 05:42:14 crc kubenswrapper[5012]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 05:42:14 crc kubenswrapper[5012]: > Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.012871 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.018917 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 
05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.354533 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"] Feb 19 05:42:15 crc kubenswrapper[5012]: E0219 05:42:15.354963 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05da3bc-6c22-4956-9fab-331eed79d175" containerName="swift-ring-rebalance" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.354980 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05da3bc-6c22-4956-9fab-331eed79d175" containerName="swift-ring-rebalance" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.355165 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05da3bc-6c22-4956-9fab-331eed79d175" containerName="swift-ring-rebalance" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.355806 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.359845 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.361517 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"] Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.450901 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.450974 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") pod 
\"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.451019 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.451059 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.451098 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.451162 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552685 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552765 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552795 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552823 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552855 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552917 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.553048 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.553078 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.553119 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.553789 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.556220 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") 
pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.580852 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.741109 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.035797 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.063652 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") pod \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.063773 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") pod \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.064809 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" (UID: 
"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.073597 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4" (OuterVolumeSpecName: "kube-api-access-c6zx4") pod "3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" (UID: "3c559b49-5b5e-435d-9a6a-66dd1d3cbc79"). InnerVolumeSpecName "kube-api-access-c6zx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.166378 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.166659 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.435519 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"] Feb 19 05:42:16 crc kubenswrapper[5012]: W0219 05:42:16.453339 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddbea515_c638_4619_8940_b23d173ceb8b.slice/crio-5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90 WatchSource:0}: Error finding container 5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90: Status 404 returned error can't find the container with id 5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90 Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.744976 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"4ee433ab916c49fcf886f80ee6ab1bd1a03ffacf8d9e4d295c0b15de25056e64"} Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.750794 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.750915 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lj2kq" event={"ID":"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79","Type":"ContainerDied","Data":"768c114b539b2feebb6baf342756cce337f48a86e7b046dcbc36cac8568a33b9"} Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.751452 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="768c114b539b2feebb6baf342756cce337f48a86e7b046dcbc36cac8568a33b9" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.753681 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cr94m-config-wnbbj" event={"ID":"ddbea515-c638-4619-8940-b23d173ceb8b","Type":"ContainerStarted","Data":"5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90"} Feb 19 05:42:17 crc kubenswrapper[5012]: I0219 05:42:17.765330 5012 generic.go:334] "Generic (PLEG): container finished" podID="ddbea515-c638-4619-8940-b23d173ceb8b" containerID="d8e57b0f2b52b5aa983f227ca12d7b7d13d90cca4cada2357120cb84084b1554" exitCode=0 Feb 19 05:42:17 crc kubenswrapper[5012]: I0219 05:42:17.765570 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cr94m-config-wnbbj" event={"ID":"ddbea515-c638-4619-8940-b23d173ceb8b","Type":"ContainerDied","Data":"d8e57b0f2b52b5aa983f227ca12d7b7d13d90cca4cada2357120cb84084b1554"} Feb 19 05:42:19 crc kubenswrapper[5012]: I0219 05:42:19.929636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:42:19 crc kubenswrapper[5012]: I0219 05:42:19.940257 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:42:20 crc kubenswrapper[5012]: I0219 05:42:20.016541 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cr94m" Feb 19 05:42:20 crc kubenswrapper[5012]: I0219 05:42:20.173631 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 05:42:22 crc kubenswrapper[5012]: I0219 05:42:22.819882 5012 generic.go:334] "Generic (PLEG): container finished" podID="8509cc68-c35e-47ea-a634-896143d747ed" containerID="4ee433ab916c49fcf886f80ee6ab1bd1a03ffacf8d9e4d295c0b15de25056e64" exitCode=0 Feb 19 05:42:22 crc kubenswrapper[5012]: I0219 05:42:22.820044 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"4ee433ab916c49fcf886f80ee6ab1bd1a03ffacf8d9e4d295c0b15de25056e64"} Feb 19 05:42:24 crc kubenswrapper[5012]: I0219 05:42:24.844635 5012 generic.go:334] "Generic (PLEG): container finished" podID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerID="0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e" exitCode=0 Feb 19 05:42:24 crc kubenswrapper[5012]: I0219 05:42:24.844695 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerDied","Data":"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e"} Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.702323 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.860927 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a"} Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863213 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863321 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863430 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863516 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") pod 
\"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863538 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863555 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863697 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run" (OuterVolumeSpecName: "var-run") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.864263 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.864319 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.864399 5012 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.864569 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.865205 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts" (OuterVolumeSpecName: "scripts") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.867663 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh" (OuterVolumeSpecName: "kube-api-access-s9lnh") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "kube-api-access-s9lnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.870724 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cr94m-config-wnbbj" event={"ID":"ddbea515-c638-4619-8940-b23d173ceb8b","Type":"ContainerDied","Data":"5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90"} Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.870755 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.870809 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.872970 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerStarted","Data":"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7"} Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.873556 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.902915 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371949.95188 podStartE2EDuration="1m26.902896213s" podCreationTimestamp="2026-02-19 05:40:59 +0000 UTC" firstStartedPulling="2026-02-19 05:41:01.860766879 +0000 UTC m=+957.894089438" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:25.892783449 +0000 UTC m=+1041.926106018" watchObservedRunningTime="2026-02-19 05:42:25.902896213 +0000 UTC m=+1041.936218782" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965733 5012 reconciler_common.go:293] "Volume detached for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965772 5012 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965786 5012 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965799 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965808 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.084401 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 05:42:26 crc kubenswrapper[5012]: W0219 05:42:26.094821 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc089afc3_1655_4675_b4e1_a62ec6929498.slice/crio-ccb3124754843d610271b604712236f7d09f9eda736f99a488a3a4169f3c3630 WatchSource:0}: Error finding container ccb3124754843d610271b604712236f7d09f9eda736f99a488a3a4169f3c3630: Status 404 returned error can't find the container with id ccb3124754843d610271b604712236f7d09f9eda736f99a488a3a4169f3c3630 Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.807756 5012 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"] Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.814996 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"] Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.881276 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-24p82" event={"ID":"31d56d90-ce06-4de3-9edb-2092780e9afe","Type":"ContainerStarted","Data":"cea9e8e15e555d9e359bdb9e094582010c0f5cb2424bf6d21370cbb196b19806"} Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.882709 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"ccb3124754843d610271b604712236f7d09f9eda736f99a488a3a4169f3c3630"} Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.902177 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-24p82" podStartSLOduration=2.316094413 podStartE2EDuration="21.902162234s" podCreationTimestamp="2026-02-19 05:42:05 +0000 UTC" firstStartedPulling="2026-02-19 05:42:06.028846606 +0000 UTC m=+1022.062169175" lastFinishedPulling="2026-02-19 05:42:25.614914407 +0000 UTC m=+1041.648236996" observedRunningTime="2026-02-19 05:42:26.895071396 +0000 UTC m=+1042.928393965" watchObservedRunningTime="2026-02-19 05:42:26.902162234 +0000 UTC m=+1042.935484793" Feb 19 05:42:27 crc kubenswrapper[5012]: I0219 05:42:27.891395 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"20c23a50bdfbf9304cec5e65cb9884882fe8fa307d92b584685d16b72dcbac8b"} Feb 19 05:42:27 crc kubenswrapper[5012]: I0219 05:42:27.892735 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"8741b7465595a5f514d42d04cd81f4d219a5f0381c43b9b5d436e13968c855ed"} Feb 19 05:42:27 crc kubenswrapper[5012]: I0219 05:42:27.892800 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"2778d95369e66794a3c77675f3e21538fcbdcd4a351b0caa9d18ceb3bdb6f2dd"} Feb 19 05:42:28 crc kubenswrapper[5012]: I0219 05:42:28.722995 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbea515-c638-4619-8940-b23d173ceb8b" path="/var/lib/kubelet/pods/ddbea515-c638-4619-8940-b23d173ceb8b/volumes" Feb 19 05:42:28 crc kubenswrapper[5012]: I0219 05:42:28.923152 5012 generic.go:334] "Generic (PLEG): container finished" podID="3c628866-f96d-4e7b-8846-7073c98dd389" containerID="39447df96b54f1be84a97ec4a361863f1bba8e92bceec140937b025ac768a708" exitCode=0 Feb 19 05:42:28 crc kubenswrapper[5012]: I0219 05:42:28.923282 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"3c628866-f96d-4e7b-8846-7073c98dd389","Type":"ContainerDied","Data":"39447df96b54f1be84a97ec4a361863f1bba8e92bceec140937b025ac768a708"} Feb 19 05:42:28 crc kubenswrapper[5012]: I0219 05:42:28.929422 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"1c6ce9784b923c0bbc26479298d4861119c1d3b3bfd0e915102814bbf819bbf8"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.949182 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"273b9f0584df6769286e4f11afc28c65d47a7cd17b2b7b2f136d0bda5f714c0a"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.949668 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"5b5424c23ebc144ff05eae1a5743a9c894ab0c56f4b72231387ca792c10fd8b9"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.952710 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.952786 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.955916 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"3c628866-f96d-4e7b-8846-7073c98dd389","Type":"ContainerStarted","Data":"cf4bbbdaf5f2ee97976e817c4b3fd945d8ee608e48e4aa6fcd69d9696509df7a"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.956447 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.998345 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.998317459 podStartE2EDuration="17.998317459s" podCreationTimestamp="2026-02-19 05:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:29.984846771 +0000 UTC m=+1046.018169340" watchObservedRunningTime="2026-02-19 05:42:29.998317459 +0000 UTC m=+1046.031640028" Feb 19 05:42:30 crc kubenswrapper[5012]: I0219 05:42:30.032441 5012 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=-9223371945.82236 podStartE2EDuration="1m31.032416086s" podCreationTimestamp="2026-02-19 05:40:59 +0000 UTC" firstStartedPulling="2026-02-19 05:41:01.242175511 +0000 UTC m=+957.275498080" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:30.029660847 +0000 UTC m=+1046.062983436" watchObservedRunningTime="2026-02-19 05:42:30.032416086 +0000 UTC m=+1046.065738655" Feb 19 05:42:30 crc kubenswrapper[5012]: I0219 05:42:30.972703 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"6e349173a3c87d71c53312e2eb929429bfb92ac867575ef3a0bc1f3ce175d475"} Feb 19 05:42:30 crc kubenswrapper[5012]: I0219 05:42:30.973092 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"2a1408a7ed3d9b943f0c272db133cf5233981ab5a337afff55e0517ec5872290"} Feb 19 05:42:30 crc kubenswrapper[5012]: I0219 05:42:30.990528 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.587693 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4vdtn"] Feb 19 05:42:31 crc kubenswrapper[5012]: E0219 05:42:31.589485 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbea515-c638-4619-8940-b23d173ceb8b" containerName="ovn-config" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.589506 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbea515-c638-4619-8940-b23d173ceb8b" containerName="ovn-config" Feb 19 05:42:31 crc kubenswrapper[5012]: E0219 05:42:31.589536 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" 
containerName="mariadb-account-create-update" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.589545 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" containerName="mariadb-account-create-update" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.590665 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" containerName="mariadb-account-create-update" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.590691 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbea515-c638-4619-8940-b23d173ceb8b" containerName="ovn-config" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.591926 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.626774 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4vdtn"] Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.712025 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"] Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.717589 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.720819 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"] Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.724898 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.774996 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.775133 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.876763 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.876806 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " 
pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.876858 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.876917 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.877679 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.902132 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.968149 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x7kz5"] Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.969756 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.978826 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.978933 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.980177 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.981788 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.982007 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.982087 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.982342 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dhq72" Feb 19 05:42:32 crc 
kubenswrapper[5012]: I0219 05:42:32.006062 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.031232 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x7kz5"] Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.049152 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.066510 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9pk56"] Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.070525 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.077293 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.080169 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.080222 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.080260 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.104208 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9pk56"] Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.182964 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.183011 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.183066 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.183105 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.183137 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.188266 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.202813 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.225035 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.244571 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"] Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.245685 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.250048 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.285783 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.285830 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.286841 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.290961 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.293804 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"] Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.331042 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.393538 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.393980 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.481240 5012 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.496105 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.496258 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.497454 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.513203 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.563053 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.865840 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"] Feb 19 05:42:32 crc kubenswrapper[5012]: W0219 05:42:32.869208 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f81d2f2_d61b_49e6_bd6a_f466da52df74.slice/crio-f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1 WatchSource:0}: Error finding container f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1: Status 404 returned error can't find the container with id f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1 Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.886842 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9pk56"] Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.886894 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x7kz5"] Feb 19 05:42:32 crc kubenswrapper[5012]: W0219 05:42:32.899019 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4bd4c60_a255_42cf_8dd0_913737e4b189.slice/crio-a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71 WatchSource:0}: Error finding container a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71: Status 404 returned error can't find the container with id a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71 Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.913906 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4vdtn"] Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.017261 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b89-account-create-update-65d6l" 
event={"ID":"0f81d2f2-d61b-49e6-bd6a-f466da52df74","Type":"ContainerStarted","Data":"f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1"} Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.091986 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"d4c9c0d11d236c1c16135ca688ff4e22570fe4685c67a41a44547dd99313e9df"} Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.094972 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9pk56" event={"ID":"a4bd4c60-a255-42cf-8dd0-913737e4b189","Type":"ContainerStarted","Data":"a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71"} Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.100510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vdtn" event={"ID":"ff889c32-0dda-4734-a907-54f4a53e649f","Type":"ContainerStarted","Data":"6b839fe2eefe4d15160c6c95535bcf0af59a5e68c5a3438dd6b22907bc36497b"} Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.112444 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7kz5" event={"ID":"13b820bd-7677-4b9c-a16f-987e22a71876","Type":"ContainerStarted","Data":"df302881738acf3fcead46288efc5eedd0d4c9dce72c4827a234c232e9e538c5"} Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.141904 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.233670 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"] Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.149049 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"a6722afb439e0e4e5f625908511f3b87433439a209c9c6f822a8e7fb25099bf2"} Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.149443 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"74ef8280c60348e1b546c43847c14bfe2e1e3bf77e2eeb709ec4586fa34bef6c"} Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.149454 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"fe46ee46195802c82609a963852a86e90054b800032558598d95fc12cdb70a8f"} Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.149463 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"bb9fc034c4b48df255fa2c97100cae4c35cab371d81c811b9ad308a1bb0532dd"} Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.155083 5012 generic.go:334] "Generic (PLEG): container finished" podID="a4bd4c60-a255-42cf-8dd0-913737e4b189" containerID="7e6d7c6e4279d09faf69cc8325c3a9419e59f879f7e638bdadc3e1a99dfe010e" exitCode=0 Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.155144 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9pk56" event={"ID":"a4bd4c60-a255-42cf-8dd0-913737e4b189","Type":"ContainerDied","Data":"7e6d7c6e4279d09faf69cc8325c3a9419e59f879f7e638bdadc3e1a99dfe010e"} Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.158804 5012 generic.go:334] "Generic (PLEG): container finished" podID="ff889c32-0dda-4734-a907-54f4a53e649f" containerID="67bce0df8bf4cde6aebe2e02939680ed6fbf6f5f67dfee6a477ff8a83ddd570c" exitCode=0 Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.158856 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-4vdtn" event={"ID":"ff889c32-0dda-4734-a907-54f4a53e649f","Type":"ContainerDied","Data":"67bce0df8bf4cde6aebe2e02939680ed6fbf6f5f67dfee6a477ff8a83ddd570c"} Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.166712 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8f98-account-create-update-7gqc9" event={"ID":"5d452976-060b-4c25-9dd0-ffed69bb4d84","Type":"ContainerStarted","Data":"20962d8cd5b490b4c52f0881b3105ca6e34c9e56c96152f389a414e4e6b49d12"} Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.166867 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8f98-account-create-update-7gqc9" event={"ID":"5d452976-060b-4c25-9dd0-ffed69bb4d84","Type":"ContainerStarted","Data":"8db0b6dffa18b9baf3f0839909f48a13fe7c1eec48f5ac3faf67f36d3af428c0"} Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.185421 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b89-account-create-update-65d6l" event={"ID":"0f81d2f2-d61b-49e6-bd6a-f466da52df74","Type":"ContainerStarted","Data":"152353fb3f9bf0d9255bd600198a1803f9e2b42292b1e50815808d78b63cdb99"} Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.196537 5012 generic.go:334] "Generic (PLEG): container finished" podID="5d452976-060b-4c25-9dd0-ffed69bb4d84" containerID="20962d8cd5b490b4c52f0881b3105ca6e34c9e56c96152f389a414e4e6b49d12" exitCode=0 Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.196630 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8f98-account-create-update-7gqc9" event={"ID":"5d452976-060b-4c25-9dd0-ffed69bb4d84","Type":"ContainerDied","Data":"20962d8cd5b490b4c52f0881b3105ca6e34c9e56c96152f389a414e4e6b49d12"} Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.202453 5012 generic.go:334] "Generic (PLEG): container finished" podID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" containerID="152353fb3f9bf0d9255bd600198a1803f9e2b42292b1e50815808d78b63cdb99" 
exitCode=0 Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.202504 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b89-account-create-update-65d6l" event={"ID":"0f81d2f2-d61b-49e6-bd6a-f466da52df74","Type":"ContainerDied","Data":"152353fb3f9bf0d9255bd600198a1803f9e2b42292b1e50815808d78b63cdb99"} Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.218087 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"73c1a419260a2d85129ae986042cb4cc178077399618b6b623ae5f7b170ca46b"} Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.218120 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"7f947b1a9c1522b6530cec6ef3b105d86d6b776c3374e7464247a1a9ca114eed"} Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.267632 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.755966849 podStartE2EDuration="49.267611855s" podCreationTimestamp="2026-02-19 05:41:46 +0000 UTC" firstStartedPulling="2026-02-19 05:42:26.09735769 +0000 UTC m=+1042.130680249" lastFinishedPulling="2026-02-19 05:42:31.609002686 +0000 UTC m=+1047.642325255" observedRunningTime="2026-02-19 05:42:35.258552458 +0000 UTC m=+1051.291875027" watchObservedRunningTime="2026-02-19 05:42:35.267611855 +0000 UTC m=+1051.300934424" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.567139 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.572844 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.576241 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.597876 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.617000 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.684846 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685021 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685119 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685274 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685330 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685356 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.755414 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.769847 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.801357 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") pod \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.802522 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") pod \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.805900 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f81d2f2-d61b-49e6-bd6a-f466da52df74" (UID: "0f81d2f2-d61b-49e6-bd6a-f466da52df74"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.805985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806036 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806264 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806314 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806345 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 
19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806386 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806452 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.808041 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.808798 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.810058 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.811035 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.817518 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.817657 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj" (OuterVolumeSpecName: "kube-api-access-cdjdj") pod "0f81d2f2-d61b-49e6-bd6a-f466da52df74" (UID: "0f81d2f2-d61b-49e6-bd6a-f466da52df74"). InnerVolumeSpecName "kube-api-access-cdjdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.817797 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.835361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.899017 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906758 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") pod \"a4bd4c60-a255-42cf-8dd0-913737e4b189\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906820 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") pod \"ff889c32-0dda-4734-a907-54f4a53e649f\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906852 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") pod \"a4bd4c60-a255-42cf-8dd0-913737e4b189\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906873 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") pod \"5d452976-060b-4c25-9dd0-ffed69bb4d84\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906901 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") pod \"ff889c32-0dda-4734-a907-54f4a53e649f\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.907219 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") pod \"5d452976-060b-4c25-9dd0-ffed69bb4d84\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.907681 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.908047 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d452976-060b-4c25-9dd0-ffed69bb4d84" (UID: "5d452976-060b-4c25-9dd0-ffed69bb4d84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.908887 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff889c32-0dda-4734-a907-54f4a53e649f" (UID: "ff889c32-0dda-4734-a907-54f4a53e649f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.911498 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4bd4c60-a255-42cf-8dd0-913737e4b189" (UID: "a4bd4c60-a255-42cf-8dd0-913737e4b189"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.914553 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch" (OuterVolumeSpecName: "kube-api-access-97sch") pod "ff889c32-0dda-4734-a907-54f4a53e649f" (UID: "ff889c32-0dda-4734-a907-54f4a53e649f"). InnerVolumeSpecName "kube-api-access-97sch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.914613 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6" (OuterVolumeSpecName: "kube-api-access-82wh6") pod "5d452976-060b-4c25-9dd0-ffed69bb4d84" (UID: "5d452976-060b-4c25-9dd0-ffed69bb4d84"). InnerVolumeSpecName "kube-api-access-82wh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.914640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c" (OuterVolumeSpecName: "kube-api-access-2jv9c") pod "a4bd4c60-a255-42cf-8dd0-913737e4b189" (UID: "a4bd4c60-a255-42cf-8dd0-913737e4b189"). InnerVolumeSpecName "kube-api-access-2jv9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009900 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009937 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009950 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009958 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009968 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009977 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.228171 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9pk56" 
event={"ID":"a4bd4c60-a255-42cf-8dd0-913737e4b189","Type":"ContainerDied","Data":"a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71"} Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.228510 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.228566 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.230037 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vdtn" event={"ID":"ff889c32-0dda-4734-a907-54f4a53e649f","Type":"ContainerDied","Data":"6b839fe2eefe4d15160c6c95535bcf0af59a5e68c5a3438dd6b22907bc36497b"} Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.230074 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b839fe2eefe4d15160c6c95535bcf0af59a5e68c5a3438dd6b22907bc36497b" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.230133 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.232242 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8f98-account-create-update-7gqc9" event={"ID":"5d452976-060b-4c25-9dd0-ffed69bb4d84","Type":"ContainerDied","Data":"8db0b6dffa18b9baf3f0839909f48a13fe7c1eec48f5ac3faf67f36d3af428c0"} Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.232282 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8db0b6dffa18b9baf3f0839909f48a13fe7c1eec48f5ac3faf67f36d3af428c0" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.232369 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.235755 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b89-account-create-update-65d6l" event={"ID":"0f81d2f2-d61b-49e6-bd6a-f466da52df74","Type":"ContainerDied","Data":"f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1"} Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.235856 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.235794 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.403473 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:37 crc kubenswrapper[5012]: I0219 05:42:37.248676 5012 generic.go:334] "Generic (PLEG): container finished" podID="31d56d90-ce06-4de3-9edb-2092780e9afe" containerID="cea9e8e15e555d9e359bdb9e094582010c0f5cb2424bf6d21370cbb196b19806" exitCode=0 Feb 19 05:42:37 crc kubenswrapper[5012]: I0219 05:42:37.249212 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-24p82" event={"ID":"31d56d90-ce06-4de3-9edb-2092780e9afe","Type":"ContainerDied","Data":"cea9e8e15e555d9e359bdb9e094582010c0f5cb2424bf6d21370cbb196b19806"} Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.268819 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerStarted","Data":"7e45922725c0d6b445eef46d6fb65aea89a26361a9ca27da6cbe105e29fbce60"} Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.271252 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-24p82" event={"ID":"31d56d90-ce06-4de3-9edb-2092780e9afe","Type":"ContainerDied","Data":"aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f"} Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.271281 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.312108 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-24p82" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.505833 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") pod \"31d56d90-ce06-4de3-9edb-2092780e9afe\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.505934 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") pod \"31d56d90-ce06-4de3-9edb-2092780e9afe\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.505956 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") pod \"31d56d90-ce06-4de3-9edb-2092780e9afe\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.506110 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") pod \"31d56d90-ce06-4de3-9edb-2092780e9afe\" (UID: 
\"31d56d90-ce06-4de3-9edb-2092780e9afe\") " Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.510544 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8" (OuterVolumeSpecName: "kube-api-access-kn8l8") pod "31d56d90-ce06-4de3-9edb-2092780e9afe" (UID: "31d56d90-ce06-4de3-9edb-2092780e9afe"). InnerVolumeSpecName "kube-api-access-kn8l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.514408 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "31d56d90-ce06-4de3-9edb-2092780e9afe" (UID: "31d56d90-ce06-4de3-9edb-2092780e9afe"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.536628 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31d56d90-ce06-4de3-9edb-2092780e9afe" (UID: "31d56d90-ce06-4de3-9edb-2092780e9afe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.549239 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data" (OuterVolumeSpecName: "config-data") pod "31d56d90-ce06-4de3-9edb-2092780e9afe" (UID: "31d56d90-ce06-4de3-9edb-2092780e9afe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.611168 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.611213 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.611229 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.611241 5012 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.280971 5012 generic.go:334] "Generic (PLEG): container finished" podID="0339ab80-3dab-44ef-aa89-49a810242704" containerID="0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda" exitCode=0 Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.281061 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerDied","Data":"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda"} Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.282999 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-24p82" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.283001 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7kz5" event={"ID":"13b820bd-7677-4b9c-a16f-987e22a71876","Type":"ContainerStarted","Data":"bc1e75b8122059977fabe9b750a293942be5f1e6a7daf5e75f1e50d40f43dd63"} Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.701927 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x7kz5" podStartSLOduration=3.521443413 podStartE2EDuration="9.701909507s" podCreationTimestamp="2026-02-19 05:42:31 +0000 UTC" firstStartedPulling="2026-02-19 05:42:32.906512621 +0000 UTC m=+1048.939835190" lastFinishedPulling="2026-02-19 05:42:39.086978715 +0000 UTC m=+1055.120301284" observedRunningTime="2026-02-19 05:42:40.388817199 +0000 UTC m=+1056.422139778" watchObservedRunningTime="2026-02-19 05:42:40.701909507 +0000 UTC m=+1056.735232076" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.706925 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="3c628866-f96d-4e7b-8846-7073c98dd389" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.712518 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.744369 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.744968 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4bd4c60-a255-42cf-8dd0-913737e4b189" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.744985 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a4bd4c60-a255-42cf-8dd0-913737e4b189" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.745002 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff889c32-0dda-4734-a907-54f4a53e649f" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745008 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff889c32-0dda-4734-a907-54f4a53e649f" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.745023 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745030 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.745039 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d452976-060b-4c25-9dd0-ffed69bb4d84" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745046 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d452976-060b-4c25-9dd0-ffed69bb4d84" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.745066 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d56d90-ce06-4de3-9edb-2092780e9afe" containerName="glance-db-sync" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745071 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d56d90-ce06-4de3-9edb-2092780e9afe" containerName="glance-db-sync" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745229 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc 
kubenswrapper[5012]: I0219 05:42:40.745263 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d56d90-ce06-4de3-9edb-2092780e9afe" containerName="glance-db-sync" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745271 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff889c32-0dda-4734-a907-54f4a53e649f" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745290 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4bd4c60-a255-42cf-8dd0-913737e4b189" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745325 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d452976-060b-4c25-9dd0-ffed69bb4d84" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.746200 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.758551 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936291 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936370 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 
05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936391 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936442 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936853 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936928 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bxt\" (UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038062 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 
19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038121 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bxt\" (UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038148 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038180 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038198 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 
05:42:41.039041 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.039114 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.039634 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.039649 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.040106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.075061 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bxt\" 
(UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.245528 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.305047 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerStarted","Data":"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f"} Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.335841 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" podStartSLOduration=6.335821977 podStartE2EDuration="6.335821977s" podCreationTimestamp="2026-02-19 05:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:41.328668967 +0000 UTC m=+1057.361991556" watchObservedRunningTime="2026-02-19 05:42:41.335821977 +0000 UTC m=+1057.369144556" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.368277 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.896863 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:41 crc kubenswrapper[5012]: W0219 05:42:41.900642 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22696f62_66c5_4302_b9dc_24a981de161e.slice/crio-688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54 WatchSource:0}: Error finding container 688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54: Status 404 returned error can't find the container with id 688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54 Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.316916 5012 generic.go:334] "Generic (PLEG): container finished" podID="22696f62-66c5-4302-b9dc-24a981de161e" containerID="4dc545a98a1a2b7d3652e3a5654dc5bb1193d0dcb50d17ba317e9cd60b9261d9" exitCode=0 Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.317249 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="dnsmasq-dns" containerID="cri-o://a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" gracePeriod=10 Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.317399 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerDied","Data":"4dc545a98a1a2b7d3652e3a5654dc5bb1193d0dcb50d17ba317e9cd60b9261d9"} Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.317440 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" 
event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerStarted","Data":"688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54"} Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.317485 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.774904 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.976226 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.976998 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.977106 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.977390 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.977566 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.977674 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.987460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62" (OuterVolumeSpecName: "kube-api-access-npj62") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "kube-api-access-npj62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.017998 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.018406 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.019058 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.030708 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config" (OuterVolumeSpecName: "config") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.031609 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080000 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080038 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080049 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080057 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080065 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080076 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.141827 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.148956 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.329166 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerStarted","Data":"9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c"} Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331268 5012 generic.go:334] "Generic (PLEG): container finished" podID="0339ab80-3dab-44ef-aa89-49a810242704" containerID="a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" exitCode=0 Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331349 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerDied","Data":"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f"} Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331408 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerDied","Data":"7e45922725c0d6b445eef46d6fb65aea89a26361a9ca27da6cbe105e29fbce60"} Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331430 5012 scope.go:117] "RemoveContainer" containerID="a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331543 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.337841 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.357335 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86d445cf77-758rq" podStartSLOduration=3.357313577 podStartE2EDuration="3.357313577s" podCreationTimestamp="2026-02-19 05:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:43.348223138 +0000 UTC m=+1059.381545717" watchObservedRunningTime="2026-02-19 05:42:43.357313577 +0000 UTC m=+1059.390636146" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.362372 5012 scope.go:117] "RemoveContainer" containerID="0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.374352 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.383264 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.385545 5012 scope.go:117] "RemoveContainer" containerID="a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" Feb 19 05:42:43 crc kubenswrapper[5012]: E0219 05:42:43.385987 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f\": container with ID starting with a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f not found: ID does not exist" containerID="a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" Feb 19 05:42:43 crc 
kubenswrapper[5012]: I0219 05:42:43.386019 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f"} err="failed to get container status \"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f\": rpc error: code = NotFound desc = could not find container \"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f\": container with ID starting with a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f not found: ID does not exist" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.386043 5012 scope.go:117] "RemoveContainer" containerID="0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda" Feb 19 05:42:43 crc kubenswrapper[5012]: E0219 05:42:43.388539 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda\": container with ID starting with 0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda not found: ID does not exist" containerID="0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.388570 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda"} err="failed to get container status \"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda\": rpc error: code = NotFound desc = could not find container \"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda\": container with ID starting with 0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda not found: ID does not exist" Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.362458 5012 generic.go:334] "Generic (PLEG): container finished" podID="13b820bd-7677-4b9c-a16f-987e22a71876" 
containerID="bc1e75b8122059977fabe9b750a293942be5f1e6a7daf5e75f1e50d40f43dd63" exitCode=0 Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.363713 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7kz5" event={"ID":"13b820bd-7677-4b9c-a16f-987e22a71876","Type":"ContainerDied","Data":"bc1e75b8122059977fabe9b750a293942be5f1e6a7daf5e75f1e50d40f43dd63"} Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.363745 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.431486 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.431553 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.725012 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0339ab80-3dab-44ef-aa89-49a810242704" path="/var/lib/kubelet/pods/0339ab80-3dab-44ef-aa89-49a810242704/volumes" Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.798236 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.941575 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") pod \"13b820bd-7677-4b9c-a16f-987e22a71876\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.941690 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") pod \"13b820bd-7677-4b9c-a16f-987e22a71876\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.941738 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") pod \"13b820bd-7677-4b9c-a16f-987e22a71876\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.950279 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr" (OuterVolumeSpecName: "kube-api-access-rvstr") pod "13b820bd-7677-4b9c-a16f-987e22a71876" (UID: "13b820bd-7677-4b9c-a16f-987e22a71876"). InnerVolumeSpecName "kube-api-access-rvstr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.976747 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13b820bd-7677-4b9c-a16f-987e22a71876" (UID: "13b820bd-7677-4b9c-a16f-987e22a71876"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.008027 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data" (OuterVolumeSpecName: "config-data") pod "13b820bd-7677-4b9c-a16f-987e22a71876" (UID: "13b820bd-7677-4b9c-a16f-987e22a71876"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.044440 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.044559 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.044623 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.387296 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7kz5" event={"ID":"13b820bd-7677-4b9c-a16f-987e22a71876","Type":"ContainerDied","Data":"df302881738acf3fcead46288efc5eedd0d4c9dce72c4827a234c232e9e538c5"} Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.387813 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df302881738acf3fcead46288efc5eedd0d4c9dce72c4827a234c232e9e538c5" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.387741 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.727358 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:42:46 crc kubenswrapper[5012]: E0219 05:42:46.730930 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b820bd-7677-4b9c-a16f-987e22a71876" containerName="keystone-db-sync" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731011 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b820bd-7677-4b9c-a16f-987e22a71876" containerName="keystone-db-sync" Feb 19 05:42:46 crc kubenswrapper[5012]: E0219 05:42:46.731069 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="dnsmasq-dns" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731118 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="dnsmasq-dns" Feb 19 05:42:46 crc kubenswrapper[5012]: E0219 05:42:46.731141 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="init" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731147 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="init" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731335 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="dnsmasq-dns" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731350 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b820bd-7677-4b9c-a16f-987e22a71876" containerName="keystone-db-sync" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731939 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737239 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737483 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737576 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737536 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dhq72" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737792 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.745777 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.789820 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.790036 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86d445cf77-758rq" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="dnsmasq-dns" containerID="cri-o://9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c" gracePeriod=10 Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.862868 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc 
kubenswrapper[5012]: I0219 05:42:46.862944 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.863002 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.863044 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.863061 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.863095 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc 
kubenswrapper[5012]: I0219 05:42:46.900521 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.947191 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.947291 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.971139 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.973077 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.973221 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.973263 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc 
kubenswrapper[5012]: I0219 05:42:46.973279 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.973326 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.977864 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.977996 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.987222 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.990723 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.994850 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.071920 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.075886 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.075970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.076001 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.076031 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.076102 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.076154 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179260 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179574 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179700 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179773 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179829 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179867 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.180698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.180994 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.182004 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.182135 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.182958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.197742 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56f66dc579-dpndj"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.202650 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.206493 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.206836 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.221779 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.222619 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2s75z" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.241273 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.248650 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.283054 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.286446 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.288542 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.289348 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.289908 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.349008 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w9g6v"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.350187 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.363583 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dzmq8" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.363836 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.366954 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.369328 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.377399 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.389205 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.389288 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.389333 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.389383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc 
kubenswrapper[5012]: I0219 05:42:47.389444 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.393858 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w9g6v"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.446259 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.449317 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.451169 5012 generic.go:334] "Generic (PLEG): container finished" podID="22696f62-66c5-4302-b9dc-24a981de161e" containerID="9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c" exitCode=0 Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.453517 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerDied","Data":"9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c"} Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.453691 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.459357 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pmvmf" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.459747 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.459876 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.459988 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.464826 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.472392 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.490514 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492539 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.491638 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492727 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492788 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493070 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493244 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493265 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493508 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") pod 
\"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493551 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493644 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493703 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493728 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493764 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " 
pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493817 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493853 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493879 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.499899 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.499997 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.504894 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.508064 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.512417 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.513191 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.517600 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.542943 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.562713 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.564628 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.566828 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.567613 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.573165 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.591097 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601071 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601244 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601351 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 
05:42:47.601473 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601544 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601707 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601774 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601803 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601818 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601834 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601885 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601922 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601970 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602034 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602059 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602078 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602519 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602599 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602633 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") pod 
\"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602719 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602742 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602768 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602829 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " 
pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602870 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602903 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602938 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602964 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.605246 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.607070 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.607539 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.608090 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jzclm"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.610248 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.612273 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hg9kp" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.613629 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.613792 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.614292 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " 
pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.618641 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.619011 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.627574 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.637542 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jzclm"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.637553 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.639026 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " 
pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.704156 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705544 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705703 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705771 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") pod \"horizon-5c45b5647f-k799c\" (UID: 
\"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705840 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705912 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705978 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706054 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706122 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" 
Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706189 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706334 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706405 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706497 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 
05:42:47.706563 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706694 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706775 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706842 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706913 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706995 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707077 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95mm\" (UniqueName: \"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707143 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707342 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707421 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707501 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.710751 5012 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.711598 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.712451 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.712497 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.713119 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.713277 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc 
kubenswrapper[5012]: I0219 05:42:47.713885 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.714226 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.714376 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.714426 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.724758 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.729421 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.730136 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.733200 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.733802 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.751426 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.765832 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc 
kubenswrapper[5012]: I0219 05:42:47.800624 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810034 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810428 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810471 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810515 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810535 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z95mm\" (UniqueName: \"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810564 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810602 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810629 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810644 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810661 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810707 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810727 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810748 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810769 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810786 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810819 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.811941 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.813079 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.813429 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.813689 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.817922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.818106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.818126 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.818373 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.820861 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.831624 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.832120 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.841961 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.854803 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.865397 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.868042 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z95mm\" (UniqueName: 
\"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.881380 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.916984 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.948623 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.007850 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.076391 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.137974 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.153403 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.191750 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.271405 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w9g6v"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.330918 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448069 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8bxt\" (UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448146 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448214 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448245 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448270 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448332 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.454708 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt" (OuterVolumeSpecName: "kube-api-access-j8bxt") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "kube-api-access-j8bxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.475028 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f66dc579-dpndj" event={"ID":"cb1825de-9782-4820-96aa-d4909a0f7820","Type":"ContainerStarted","Data":"9f3f55ad97ef9c22bb96987a2cbaf0c250aabbcc040a9c414cfd11f3987fe5ea"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.482038 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9g6v" event={"ID":"be803869-4625-418d-bd39-bdbb4e6e0bfd","Type":"ContainerStarted","Data":"1bf63392d872a713c6fdde27be345aac65be8a37d2e0427ef52052d66a795c4c"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.500093 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kqsgz" event={"ID":"25558255-c27f-4f6e-a838-675ae8ec77b6","Type":"ContainerStarted","Data":"1512f5c39e8f19ace9b3040d9e0b560368c4be80736d22a497fc8bc26c80da61"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 
05:42:48.504258 5012 generic.go:334] "Generic (PLEG): container finished" podID="3d569c4f-6582-4673-a847-2243e668635d" containerID="6e7d52bd562ed0efb2a1545f6259d1676314bebb149958773ecde4d783c3e952" exitCode=0 Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.504312 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc5b45897-x97md" event={"ID":"3d569c4f-6582-4673-a847-2243e668635d","Type":"ContainerDied","Data":"6e7d52bd562ed0efb2a1545f6259d1676314bebb149958773ecde4d783c3e952"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.504329 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc5b45897-x97md" event={"ID":"3d569c4f-6582-4673-a847-2243e668635d","Type":"ContainerStarted","Data":"7d63ba2ee2576ca17b6efd2d48b316dd1077ddf56df509a2d331138612d7e898"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.537857 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.538106 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerDied","Data":"688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.538147 5012 scope.go:117] "RemoveContainer" containerID="9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.538402 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.540040 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.550656 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.550681 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8bxt\" (UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.616669 5012 scope.go:117] "RemoveContainer" containerID="4dc545a98a1a2b7d3652e3a5654dc5bb1193d0dcb50d17ba317e9cd60b9261d9" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.647057 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jzclm"] Feb 19 05:42:48 crc kubenswrapper[5012]: W0219 05:42:48.681778 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda34a979c_9102_471f_9678_048fd5198cb8.slice/crio-15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969 WatchSource:0}: Error finding container 15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969: Status 404 returned error can't find the container with id 15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969 Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.690252 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.716624 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.731589 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config" (OuterVolumeSpecName: "config") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.740551 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.755917 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.755950 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.755961 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.759460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: W0219 05:42:48.762170 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bd6edb4_0376_458f_bb9d_f24e5e7ff47b.slice/crio-7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44 WatchSource:0}: Error finding container 7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44: Status 404 returned error can't find the container with id 7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44 Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.762249 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.877526 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.911261 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.960645 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.992917 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.079836 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.172254 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182159 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182255 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182310 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182346 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 
05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182511 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.239796 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd" (OuterVolumeSpecName: "kube-api-access-dbncd") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "kube-api-access-dbncd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.253926 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.265664 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config" (OuterVolumeSpecName: "config") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.266854 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.284809 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285896 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285915 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285923 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285931 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") on node 
\"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285939 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.317920 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.387056 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.588331 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jzclm" event={"ID":"a34a979c-9102-471f-9678-048fd5198cb8","Type":"ContainerStarted","Data":"15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.593165 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerStarted","Data":"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.593263 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerStarted","Data":"a1291378cdde1b6340e354ff4d89e75f3fa2d7a84c8a3f64370b1decfc0c8b1c"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.598201 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kqsgz" event={"ID":"25558255-c27f-4f6e-a838-675ae8ec77b6","Type":"ContainerStarted","Data":"12a292fc1b8e4523fdc0fb30ca3590a1b6b6f0c70c3e42e076f92a7b213241f2"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.609134 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc5b45897-x97md" event={"ID":"3d569c4f-6582-4673-a847-2243e668635d","Type":"ContainerDied","Data":"7d63ba2ee2576ca17b6efd2d48b316dd1077ddf56df509a2d331138612d7e898"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.609182 5012 scope.go:117] "RemoveContainer" containerID="6e7d52bd562ed0efb2a1545f6259d1676314bebb149958773ecde4d783c3e952" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.609262 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.624531 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerStarted","Data":"15902dc00744af1a937cdb4358bfbecf5055d748ea34206757e15f3417243c32"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.632571 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerStarted","Data":"d5e045aceaaad28fe4dee87429ebb210f9d9b506f56cee1ed148bccaa4202c45"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.640434 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kqsgz" podStartSLOduration=3.640401788 podStartE2EDuration="3.640401788s" podCreationTimestamp="2026-02-19 05:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:49.629975006 +0000 UTC 
m=+1065.663297575" watchObservedRunningTime="2026-02-19 05:42:49.640401788 +0000 UTC m=+1065.673724357" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.658185 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerStarted","Data":"86338dd7d36f9586a8f23b3288040adf41c4f986fc6d17aadaff0853e2749dd7"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.659631 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.731897 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.829119 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.884707 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.939478 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.966348 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"] Feb 19 05:42:49 crc kubenswrapper[5012]: E0219 05:42:49.966788 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="dnsmasq-dns" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.966801 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="dnsmasq-dns" Feb 19 05:42:49 crc kubenswrapper[5012]: E0219 05:42:49.966830 5012 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3d569c4f-6582-4673-a847-2243e668635d" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.966839 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d569c4f-6582-4673-a847-2243e668635d" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: E0219 05:42:49.966857 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.966863 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.967051 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="dnsmasq-dns" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.967065 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d569c4f-6582-4673-a847-2243e668635d" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.968098 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.987516 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"] Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.006677 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.018886 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.118938 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.119410 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.119472 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.119540 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") pod 
\"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.119562 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228669 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228726 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228752 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228797 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") 
" pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228851 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.230050 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.230666 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.231052 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.236046 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.249724 5012 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.355934 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.689862 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerStarted","Data":"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc"} Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.695615 5012 generic.go:334] "Generic (PLEG): container finished" podID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerID="46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe" exitCode=0 Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.695715 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerDied","Data":"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe"} Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.695773 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerStarted","Data":"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23"} Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.695791 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.715164 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22696f62-66c5-4302-b9dc-24a981de161e" 
path="/var/lib/kubelet/pods/22696f62-66c5-4302-b9dc-24a981de161e/volumes" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.715932 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d569c4f-6582-4673-a847-2243e668635d" path="/var/lib/kubelet/pods/3d569c4f-6582-4673-a847-2243e668635d/volumes" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.720630 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.723775 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" podStartSLOduration=3.7237547920000003 podStartE2EDuration="3.723754792s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:50.723442094 +0000 UTC m=+1066.756764683" watchObservedRunningTime="2026-02-19 05:42:50.723754792 +0000 UTC m=+1066.757077361" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.936801 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.289590 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xj7dw"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.290834 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.308550 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gfhmj"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.309683 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.320290 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.320475 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.320582 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-c2ldt" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.340467 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.367756 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xj7dw"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.394243 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gfhmj"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.451360 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-cdj57"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.452604 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.460355 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-6chdl" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.460541 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461499 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461565 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461611 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461631 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 
19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461667 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461694 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461737 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461755 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.466012 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-cdj57"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.529566 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.532089 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570709 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570788 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570829 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 
05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570931 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570960 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.571025 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.571068 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.571118 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.571143 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.581087 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.581913 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.582107 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.598494 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.606000 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.630604 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.630887 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.644733 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.644774 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.645132 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.663383 5012 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677038 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677100 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677151 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677190 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677270 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677351 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677469 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677522 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677576 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677601 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rhkv\" (UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.684187 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.692215 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.698017 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.698888 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.705268 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.708962 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791619 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791682 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791755 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791786 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rhkv\" 
(UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791826 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791846 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791869 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.793032 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.793241 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.794824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.795495 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.809941 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.810600 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-log" containerID="cri-o://10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" gracePeriod=30 Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.814745 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-httpd" containerID="cri-o://eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" gracePeriod=30 Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.818683 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.824095 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.845833 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rhkv\" (UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc 
kubenswrapper[5012]: I0219 05:42:51.857149 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerStarted","Data":"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e"} Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.860864 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.860854076 podStartE2EDuration="4.860854076s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:51.856870356 +0000 UTC m=+1067.890192925" watchObservedRunningTime="2026-02-19 05:42:51.860854076 +0000 UTC m=+1067.894176645" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.865939 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-855998b9f9-lkm6w" event={"ID":"f06c7918-a7b3-4041-bd16-63a73e47bf13","Type":"ContainerStarted","Data":"fe94aacdf3b8c844dc9abab1e415854e4e47ea212d07513a67fc4c1411f63f3f"} Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.880974 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.941445 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.952379 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.029585 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:52 crc kubenswrapper[5012]: E0219 05:42:52.112077 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13bff5bd_2005_4cce_986a_5bcd2d5a396c.slice/crio-10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13bff5bd_2005_4cce_986a_5bcd2d5a396c.slice/crio-conmon-eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1.scope\": RecentStats: unable to find data in memory cache]" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.699438 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.863063 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.881497 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-cdj57"] Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.887464 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerStarted","Data":"7cfa7cf48e4edcddab8aec2d0bfb0aeea8557ac2316f0d4b2e00c1aa2310cba1"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955165 5012 generic.go:334] "Generic (PLEG): container finished" podID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" exitCode=143 Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955205 5012 generic.go:334] "Generic (PLEG): container finished" podID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" exitCode=143 Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955348 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955397 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerDied","Data":"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955478 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerDied","Data":"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955505 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerDied","Data":"15902dc00744af1a937cdb4358bfbecf5055d748ea34206757e15f3417243c32"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955530 5012 scope.go:117] "RemoveContainer" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.973968 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerStarted","Data":"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.974444 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-log" containerID="cri-o://81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" gracePeriod=30 Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.974864 5012 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-internal-api-0" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-httpd" containerID="cri-o://7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" gracePeriod=30 Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022341 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022431 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022457 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022507 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022566 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022620 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022726 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022783 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.030391 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.030484 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xj7dw"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.036642 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs" (OuterVolumeSpecName: "logs") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.039610 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.048406 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts" (OuterVolumeSpecName: "scripts") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.049127 5012 scope.go:117] "RemoveContainer" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.055987 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.055951709 podStartE2EDuration="6.055951709s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:53.007178213 +0000 UTC m=+1069.040500782" watchObservedRunningTime="2026-02-19 05:42:53.055951709 +0000 UTC m=+1069.089274278" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.064553 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv" (OuterVolumeSpecName: "kube-api-access-zhfdv") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). 
InnerVolumeSpecName "kube-api-access-zhfdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.074881 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.088453 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.099608 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gfhmj"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.110199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data" (OuterVolumeSpecName: "config-data") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.120178 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134813 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134855 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134866 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134904 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134914 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134922 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134930 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134938 5012 reconciler_common.go:293] "Volume detached for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.168640 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.237441 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.360705 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.394711 5012 scope.go:117] "RemoveContainer" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.403948 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:53 crc kubenswrapper[5012]: E0219 05:42:53.414179 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": container with ID starting with eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1 not found: ID does not exist" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.414258 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1"} err="failed to get container status \"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": rpc error: code = NotFound desc = could not find container 
\"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": container with ID starting with eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1 not found: ID does not exist" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.414296 5012 scope.go:117] "RemoveContainer" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" Feb 19 05:42:53 crc kubenswrapper[5012]: E0219 05:42:53.414768 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": container with ID starting with 10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc not found: ID does not exist" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.414796 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc"} err="failed to get container status \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": rpc error: code = NotFound desc = could not find container \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": container with ID starting with 10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc not found: ID does not exist" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.414810 5012 scope.go:117] "RemoveContainer" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.415102 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1"} err="failed to get container status \"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": rpc error: code = NotFound desc = could not find 
container \"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": container with ID starting with eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1 not found: ID does not exist" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.415130 5012 scope.go:117] "RemoveContainer" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.415419 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc"} err="failed to get container status \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": rpc error: code = NotFound desc = could not find container \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": container with ID starting with 10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc not found: ID does not exist" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.421037 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:53 crc kubenswrapper[5012]: E0219 05:42:53.421824 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-log" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.421839 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-log" Feb 19 05:42:53 crc kubenswrapper[5012]: E0219 05:42:53.421880 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-httpd" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.421887 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-httpd" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.422062 5012 
memory_manager.go:354] "RemoveStaleState removing state" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-log" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.422077 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-httpd" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.424070 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.429401 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.432012 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.432287 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.559954 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560027 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560055 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560166 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560359 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560436 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560481 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560589 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662695 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662746 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662762 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662844 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662873 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662897 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662929 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.664084 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.664347 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.664703 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.672206 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.681111 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.681600 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.686179 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " 
pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.691858 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.721516 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.759158 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.833155 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.968950 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969384 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969492 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969534 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969567 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969619 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969640 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969683 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.970653 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs" (OuterVolumeSpecName: "logs") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.973536 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.975710 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts" (OuterVolumeSpecName: "scripts") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.983416 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm" (OuterVolumeSpecName: "kube-api-access-hwgzm") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "kube-api-access-hwgzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.985815 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998157 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998191 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998200 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998228 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998238 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.019934 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.026837 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xj7dw" event={"ID":"b98c972c-b350-44a1-a7c5-028914fe7bfc","Type":"ContainerStarted","Data":"a84681fa37d45c4925f780e8954023bd4c066ed1cbb2bb7d3fe3e2f3209e4c8b"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.036810 5012 generic.go:334] "Generic (PLEG): container 
finished" podID="8c63064a-a5f1-48da-b11c-eb76b04e3397" containerID="cfe7e53a61fb5256f22c4a39c4ac5b0bf7cc2f1ccf28f2709694c6b3715b8d0c" exitCode=0 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.037113 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gfhmj" event={"ID":"8c63064a-a5f1-48da-b11c-eb76b04e3397","Type":"ContainerDied","Data":"cfe7e53a61fb5256f22c4a39c4ac5b0bf7cc2f1ccf28f2709694c6b3715b8d0c"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.037171 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gfhmj" event={"ID":"8c63064a-a5f1-48da-b11c-eb76b04e3397","Type":"ContainerStarted","Data":"92bd8d00206aa24501f4ff93ff9ed472e42ea7a5e3069017ed2ab2dec8bfa7db"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.039622 5012 generic.go:334] "Generic (PLEG): container finished" podID="cd86f802-eef3-479a-870a-e34e7ce028ba" containerID="a20a059012a07fc06fff87153b7822f281e937cfbfdfbad5c4e4671c1d2bfb30" exitCode=0 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.040084 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c723-account-create-update-n6sg9" event={"ID":"cd86f802-eef3-479a-870a-e34e7ce028ba","Type":"ContainerDied","Data":"a20a059012a07fc06fff87153b7822f281e937cfbfdfbad5c4e4671c1d2bfb30"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.040123 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c723-account-create-update-n6sg9" event={"ID":"cd86f802-eef3-479a-870a-e34e7ce028ba","Type":"ContainerStarted","Data":"ffe1af5d39043f27e616cee7a2931ec77e266d5fd98f4e7902bda884efb795d5"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.041711 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-cdj57" event={"ID":"89f14c4e-147e-4a05-a8d9-63b93aaad4a4","Type":"ContainerStarted","Data":"416ab3f77df9ec5ea4c8eb669473dd003dd38b711f8cc41ad525b40979b07e19"} Feb 19 05:42:54 crc 
kubenswrapper[5012]: I0219 05:42:54.046690 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.056178 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.073571 5012 generic.go:334] "Generic (PLEG): container finished" podID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" exitCode=0 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.073984 5012 generic.go:334] "Generic (PLEG): container finished" podID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" exitCode=143 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.073661 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerDied","Data":"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.074058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerDied","Data":"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.074088 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerDied","Data":"d5e045aceaaad28fe4dee87429ebb210f9d9b506f56cee1ed148bccaa4202c45"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.073702 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.074107 5012 scope.go:117] "RemoveContainer" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.100163 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.100211 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.100226 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.100986 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data" (OuterVolumeSpecName: "config-data") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.196660 5012 scope.go:117] "RemoveContainer" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.203612 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.283989 5012 scope.go:117] "RemoveContainer" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" Feb 19 05:42:54 crc kubenswrapper[5012]: E0219 05:42:54.285199 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": container with ID starting with 7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d not found: ID does not exist" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.285241 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d"} err="failed to get container status \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": rpc error: code = NotFound desc = could not find container \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": container with ID starting with 7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d not found: ID does not exist" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.285266 5012 scope.go:117] "RemoveContainer" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" Feb 19 05:42:54 crc kubenswrapper[5012]: E0219 05:42:54.286784 5012 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": container with ID starting with 81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e not found: ID does not exist" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.286811 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e"} err="failed to get container status \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": rpc error: code = NotFound desc = could not find container \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": container with ID starting with 81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e not found: ID does not exist" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.286856 5012 scope.go:117] "RemoveContainer" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.287850 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d"} err="failed to get container status \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": rpc error: code = NotFound desc = could not find container \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": container with ID starting with 7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d not found: ID does not exist" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.287890 5012 scope.go:117] "RemoveContainer" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.288913 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e"} err="failed to get container status \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": rpc error: code = NotFound desc = could not find container \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": container with ID starting with 81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e not found: ID does not exist" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.442242 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.464361 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.478942 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: E0219 05:42:54.487753 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-httpd" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.487789 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-httpd" Feb 19 05:42:54 crc kubenswrapper[5012]: E0219 05:42:54.487839 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-log" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.487847 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-log" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.488110 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-log" Feb 19 
05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.488142 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-httpd" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.489159 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.493431 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.496726 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.508356 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: W0219 05:42:54.560769 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88a90a35_c893_4857_9f8b_9a405c96c044.slice/crio-ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9 WatchSource:0}: Error finding container ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9: Status 404 returned error can't find the container with id ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.568366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621196 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621288 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621330 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621398 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621415 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" 
(UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621441 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621466 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728383 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728473 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728547 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 
05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728570 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728628 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728698 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728720 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 
05:42:54.728907 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.735780 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.746957 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.753430 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.758431 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.759123 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.764338 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.782234 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" path="/var/lib/kubelet/pods/0b53da41-1ee4-4a06-b1ad-2f689fafd2be/volumes" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.783052 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" path="/var/lib/kubelet/pods/13bff5bd-2005-4cce-986a-5bcd2d5a396c/volumes" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.802556 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.805086 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.097426 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerStarted","Data":"ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9"} Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.113329 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.891131 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"] Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.911744 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.958522 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cdcb467fb-8tvnz"] Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.969354 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.972028 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.975255 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdcb467fb-8tvnz"] Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.021537 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074572 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-secret-key\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074624 
5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-scripts\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074651 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-tls-certs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074692 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c937bbe-f068-4e5b-81ad-9455104062da-logs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074714 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqq6d\" (UniqueName: \"kubernetes.io/projected/6c937bbe-f068-4e5b-81ad-9455104062da-kube-api-access-xqq6d\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074757 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-combined-ca-bundle\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074775 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-config-data\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.118975 5012 generic.go:334] "Generic (PLEG): container finished" podID="25558255-c27f-4f6e-a838-675ae8ec77b6" containerID="12a292fc1b8e4523fdc0fb30ca3590a1b6b6f0c70c3e42e076f92a7b213241f2" exitCode=0 Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.119019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kqsgz" event={"ID":"25558255-c27f-4f6e-a838-675ae8ec77b6","Type":"ContainerDied","Data":"12a292fc1b8e4523fdc0fb30ca3590a1b6b6f0c70c3e42e076f92a7b213241f2"} Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.178771 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c937bbe-f068-4e5b-81ad-9455104062da-logs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.178843 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqq6d\" (UniqueName: \"kubernetes.io/projected/6c937bbe-f068-4e5b-81ad-9455104062da-kube-api-access-xqq6d\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.178898 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-combined-ca-bundle\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " 
pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.178924 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-config-data\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.179002 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-secret-key\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.179030 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-scripts\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.179054 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-tls-certs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.180167 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c937bbe-f068-4e5b-81ad-9455104062da-logs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.181316 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-config-data\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.181537 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-scripts\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.195798 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-secret-key\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.196812 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-combined-ca-bundle\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.204323 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-tls-certs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.210508 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqq6d\" (UniqueName: 
\"kubernetes.io/projected/6c937bbe-f068-4e5b-81ad-9455104062da-kube-api-access-xqq6d\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.307951 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:57 crc kubenswrapper[5012]: I0219 05:42:57.812583 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:57 crc kubenswrapper[5012]: I0219 05:42:57.882360 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"] Feb 19 05:42:57 crc kubenswrapper[5012]: I0219 05:42:57.882585 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" containerID="cri-o://7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e" gracePeriod=10 Feb 19 05:42:58 crc kubenswrapper[5012]: I0219 05:42:58.170891 5012 generic.go:334] "Generic (PLEG): container finished" podID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerID="7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e" exitCode=0 Feb 19 05:42:58 crc kubenswrapper[5012]: I0219 05:42:58.170939 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerDied","Data":"7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e"} Feb 19 05:43:02 crc kubenswrapper[5012]: I0219 05:43:02.859895 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Feb 19 
05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.500444 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.506575 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.516630 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gfhmj" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689256 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") pod \"cd86f802-eef3-479a-870a-e34e7ce028ba\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689342 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rhkv\" (UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") pod \"cd86f802-eef3-479a-870a-e34e7ce028ba\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689448 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689640 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: 
\"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689676 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689778 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") pod \"8c63064a-a5f1-48da-b11c-eb76b04e3397\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689860 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689888 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689948 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") pod \"8c63064a-a5f1-48da-b11c-eb76b04e3397\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.690013 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.690681 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd86f802-eef3-479a-870a-e34e7ce028ba" (UID: "cd86f802-eef3-479a-870a-e34e7ce028ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.696020 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c63064a-a5f1-48da-b11c-eb76b04e3397" (UID: "8c63064a-a5f1-48da-b11c-eb76b04e3397"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.696357 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.696684 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.698165 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh" (OuterVolumeSpecName: "kube-api-access-d7hsh") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "kube-api-access-d7hsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.712867 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts" (OuterVolumeSpecName: "scripts") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.712930 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp" (OuterVolumeSpecName: "kube-api-access-fwnxp") pod "8c63064a-a5f1-48da-b11c-eb76b04e3397" (UID: "8c63064a-a5f1-48da-b11c-eb76b04e3397"). InnerVolumeSpecName "kube-api-access-fwnxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: E0219 05:43:07.716609 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data podName:25558255-c27f-4f6e-a838-675ae8ec77b6 nodeName:}" failed. No retries permitted until 2026-02-19 05:43:08.216577245 +0000 UTC m=+1084.249899814 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6") : error deleting /var/lib/kubelet/pods/25558255-c27f-4f6e-a838-675ae8ec77b6/volume-subpaths: remove /var/lib/kubelet/pods/25558255-c27f-4f6e-a838-675ae8ec77b6/volume-subpaths: no such file or directory Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.721232 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.726597 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv" (OuterVolumeSpecName: "kube-api-access-8rhkv") pod "cd86f802-eef3-479a-870a-e34e7ce028ba" (UID: "cd86f802-eef3-479a-870a-e34e7ce028ba"). InnerVolumeSpecName "kube-api-access-8rhkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792764 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792804 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792814 5012 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792824 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792835 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792844 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792852 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rhkv\" (UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 
05:43:07.792860 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792868 5012 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.286999 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.286996 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c723-account-create-update-n6sg9" event={"ID":"cd86f802-eef3-479a-870a-e34e7ce028ba","Type":"ContainerDied","Data":"ffe1af5d39043f27e616cee7a2931ec77e266d5fd98f4e7902bda884efb795d5"} Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.287571 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe1af5d39043f27e616cee7a2931ec77e266d5fd98f4e7902bda884efb795d5" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.289728 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.289746 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kqsgz" event={"ID":"25558255-c27f-4f6e-a838-675ae8ec77b6","Type":"ContainerDied","Data":"1512f5c39e8f19ace9b3040d9e0b560368c4be80736d22a497fc8bc26c80da61"} Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.289786 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1512f5c39e8f19ace9b3040d9e0b560368c4be80736d22a497fc8bc26c80da61" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.292259 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerStarted","Data":"ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7"} Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.294480 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gfhmj" event={"ID":"8c63064a-a5f1-48da-b11c-eb76b04e3397","Type":"ContainerDied","Data":"92bd8d00206aa24501f4ff93ff9ed472e42ea7a5e3069017ed2ab2dec8bfa7db"} Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.294538 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92bd8d00206aa24501f4ff93ff9ed472e42ea7a5e3069017ed2ab2dec8bfa7db" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.294619 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gfhmj" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.303820 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.309547 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data" (OuterVolumeSpecName: "config-data") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.405974 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.640363 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.674109 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.724512 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25558255-c27f-4f6e-a838-675ae8ec77b6" path="/var/lib/kubelet/pods/25558255-c27f-4f6e-a838-675ae8ec77b6/volumes" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.729725 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zf89d"] Feb 19 05:43:08 crc kubenswrapper[5012]: E0219 05:43:08.730061 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25558255-c27f-4f6e-a838-675ae8ec77b6" 
containerName="keystone-bootstrap" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730077 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="25558255-c27f-4f6e-a838-675ae8ec77b6" containerName="keystone-bootstrap" Feb 19 05:43:08 crc kubenswrapper[5012]: E0219 05:43:08.730112 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd86f802-eef3-479a-870a-e34e7ce028ba" containerName="mariadb-account-create-update" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730119 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd86f802-eef3-479a-870a-e34e7ce028ba" containerName="mariadb-account-create-update" Feb 19 05:43:08 crc kubenswrapper[5012]: E0219 05:43:08.730133 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c63064a-a5f1-48da-b11c-eb76b04e3397" containerName="mariadb-database-create" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730140 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c63064a-a5f1-48da-b11c-eb76b04e3397" containerName="mariadb-database-create" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730366 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c63064a-a5f1-48da-b11c-eb76b04e3397" containerName="mariadb-database-create" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730392 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="25558255-c27f-4f6e-a838-675ae8ec77b6" containerName="keystone-bootstrap" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730403 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd86f802-eef3-479a-870a-e34e7ce028ba" containerName="mariadb-account-create-update" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730990 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.737709 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.737874 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.738003 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.738155 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dhq72" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.742613 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.796660 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zf89d"] Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.927595 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.928337 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.928607 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.929314 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.929411 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.929473 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032411 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032482 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032559 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032603 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032666 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032775 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.038226 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") pod \"keystone-bootstrap-zf89d\" (UID: 
\"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.038835 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.039739 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.041828 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.042082 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.049252 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 
05:43:09.078643 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.680552 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-px7xk"] Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.682124 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.692416 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.692551 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.692582 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.693036 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-px7xk"] Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.693613 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rtrj8" Feb 19 05:43:11 crc 
kubenswrapper[5012]: I0219 05:43:11.693695 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.693963 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.795033 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.795397 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.795458 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.806814 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.813003 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.817552 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:12 crc kubenswrapper[5012]: I0219 05:43:12.002589 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-px7xk" Feb 19 05:43:12 crc kubenswrapper[5012]: I0219 05:43:12.861016 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 19 05:43:14 crc kubenswrapper[5012]: I0219 05:43:14.431997 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:43:14 crc kubenswrapper[5012]: I0219 05:43:14.432660 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.587021 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.587685 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.587912 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf4hcdh54h585h5b4hfbh667h5dch5d4h85hf9h5dh8dh64hddh676hdfh575h56dh699h5cbhd6hfdh589h5bdh5f6hddh569h549h87h59dh557q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdkkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:
*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-855998b9f9-lkm6w_openstack(f06c7918-a7b3-4041-bd16-63a73e47bf13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.594740 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-855998b9f9-lkm6w" podUID="f06c7918-a7b3-4041-bd16-63a73e47bf13" Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.607608 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.607700 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.607926 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67bh5ch5ffh9bh7h67bh57h59ch5c7h59bh5b9h5ffh647h694h5ffhf5h5fh677hbh575h587hcfh589h66bh55dh594h547h77h68fh695h559h566q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9stt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56f66dc579-dpndj_openstack(cb1825de-9782-4820-96aa-d4909a0f7820): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 
05:43:15.612334 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-56f66dc579-dpndj" podUID="cb1825de-9782-4820-96aa-d4909a0f7820" Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.680512 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807330 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807667 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807761 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807833 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") pod 
\"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807874 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25brm\" (UniqueName: \"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.815987 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm" (OuterVolumeSpecName: "kube-api-access-25brm") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "kube-api-access-25brm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.866435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config" (OuterVolumeSpecName: "config") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.867749 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.874474 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.896402 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912337 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912402 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912421 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25brm\" (UniqueName: \"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912467 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912483 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.403454 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b"
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.403923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerDied","Data":"6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf"}
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.404488 5012 scope.go:117] "RemoveContainer" containerID="7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e"
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.493939 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"]
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.502895 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"]
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.730246 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" path="/var/lib/kubelet/pods/f22ec0c5-41a9-4f36-adb0-405e5a26d209/volumes"
Feb 19 05:43:17 crc kubenswrapper[5012]: I0219 05:43:17.862461 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout"
Feb 19 05:43:24 crc kubenswrapper[5012]: E0219 05:43:24.831741 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current"
Feb 19 05:43:24 crc kubenswrapper[5012]: E0219 05:43:24.832812 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current"
Feb 19 05:43:24 crc kubenswrapper[5012]: E0219 05:43:24.833430 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fq87n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-cdj57_openstack(89f14c4e-147e-4a05-a8d9-63b93aaad4a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 05:43:24 crc kubenswrapper[5012]: E0219 05:43:24.834664 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-cdj57" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.307187 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.307236 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.307409 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z95mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-jzclm_openstack(a34a979c-9102-471f-9678-048fd5198cb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.308623 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-jzclm" podUID="a34a979c-9102-471f-9678-048fd5198cb8"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.441242 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56f66dc579-dpndj"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.450832 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-855998b9f9-lkm6w"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.535509 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f66dc579-dpndj" event={"ID":"cb1825de-9782-4820-96aa-d4909a0f7820","Type":"ContainerDied","Data":"9f3f55ad97ef9c22bb96987a2cbaf0c250aabbcc040a9c414cfd11f3987fe5ea"}
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.535728 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56f66dc579-dpndj"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.540424 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-855998b9f9-lkm6w" event={"ID":"f06c7918-a7b3-4041-bd16-63a73e47bf13","Type":"ContainerDied","Data":"fe94aacdf3b8c844dc9abab1e415854e4e47ea212d07513a67fc4c1411f63f3f"}
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.540515 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-855998b9f9-lkm6w"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.542259 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current\\\"\"" pod="openstack/barbican-db-sync-jzclm" podUID="a34a979c-9102-471f-9678-048fd5198cb8"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.542529 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current\\\"\"" pod="openstack/watcher-db-sync-cdj57" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.565231 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.565712 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.565938 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566195 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566366 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs" (OuterVolumeSpecName: "logs") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566485 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566776 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs" (OuterVolumeSpecName: "logs") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566789 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.567292 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.568960 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.569066 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.569153 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.567765 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data" (OuterVolumeSpecName: "config-data") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.567760 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data" (OuterVolumeSpecName: "config-data") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.568044 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts" (OuterVolumeSpecName: "scripts") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.570208 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts" (OuterVolumeSpecName: "scripts") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.571461 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.571648 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.572038 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.572099 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.572166 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.572227 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.574520 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.574613 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.574800 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt" (OuterVolumeSpecName: "kube-api-access-p9stt") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "kube-api-access-p9stt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.575489 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv" (OuterVolumeSpecName: "kube-api-access-rdkkv") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "kube-api-access-rdkkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.674856 5012 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.674905 5012 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.674919 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.674935 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.915354 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"]
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.934273 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"]
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.947569 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"]
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.953655 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"]
Feb 19 05:43:26 crc kubenswrapper[5012]: E0219 05:43:26.654993 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current"
Feb 19 05:43:26 crc kubenswrapper[5012]: E0219 05:43:26.655899 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current"
Feb 19 05:43:26 crc kubenswrapper[5012]: E0219 05:43:26.656152 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sghmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xj7dw_openstack(b98c972c-b350-44a1-a7c5-028914fe7bfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 05:43:26 crc kubenswrapper[5012]: E0219 05:43:26.657423 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xj7dw" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc"
Feb 19 05:43:26 crc kubenswrapper[5012]: I0219 05:43:26.662004 5012 scope.go:117] "RemoveContainer" containerID="d961f9b5d55a9bfaff596c3b756f78502ea40069f8fb1a18443be8e579f64c1b"
Feb 19 05:43:26 crc kubenswrapper[5012]: I0219 05:43:26.731873 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1825de-9782-4820-96aa-d4909a0f7820" path="/var/lib/kubelet/pods/cb1825de-9782-4820-96aa-d4909a0f7820/volumes"
Feb 19 05:43:26 crc kubenswrapper[5012]: I0219 05:43:26.732634 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06c7918-a7b3-4041-bd16-63a73e47bf13" path="/var/lib/kubelet/pods/f06c7918-a7b3-4041-bd16-63a73e47bf13/volumes"
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.204686 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-px7xk"]
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.213008 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdcb467fb-8tvnz"]
Feb 19 05:43:27 crc kubenswrapper[5012]: W0219 05:43:27.223596 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod787f8a71_dee4_40d2_b33b_85bcfc58f921.slice/crio-22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39 WatchSource:0}: Error finding container 22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39: Status 404 returned error can't find the container with id 22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.318320 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zf89d"]
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.350699 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.556724 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerStarted","Data":"ed8c2a32d5ff07698cb91058f40ee14be5a75fe90e647b1bf825aa951923980d"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.558796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdcb467fb-8tvnz" event={"ID":"6c937bbe-f068-4e5b-81ad-9455104062da","Type":"ContainerStarted","Data":"76c122b092d56fce3822adebfb83dea25f5d7b1dfa2c9ca1adcf7e290a003998"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.558853 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdcb467fb-8tvnz" event={"ID":"6c937bbe-f068-4e5b-81ad-9455104062da","Type":"ContainerStarted","Data":"0e8ce8a183c403ce190eb750bca6315d62e08338a10355a944facbcaf4ffac73"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.560476 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerStarted","Data":"080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.560611 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-log" containerID="cri-o://ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7" gracePeriod=30
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.560632 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-httpd" containerID="cri-o://080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33" gracePeriod=30
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.564666 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-px7xk" event={"ID":"787f8a71-dee4-40d2-b33b-85bcfc58f921","Type":"ContainerStarted","Data":"c3b30cfc4d7788c5bf2800aec00271d7a398ee5903276843825107c74fa7f5b9"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.564702 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-px7xk" event={"ID":"787f8a71-dee4-40d2-b33b-85bcfc58f921","Type":"ContainerStarted","Data":"22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.570612 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zf89d" event={"ID":"555a6373-5cdf-490e-b6ea-b0fb55425d28","Type":"ContainerStarted","Data":"9a5f9edac057b3de1965c26aac0927e9eaced35943e1b07d9b0176cc162f7fc5"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.570638 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zf89d" event={"ID":"555a6373-5cdf-490e-b6ea-b0fb55425d28","Type":"ContainerStarted","Data":"6d075631935c3f811666c0cc2948951facdd256267e93e94fef708b6510322b7"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.586744 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9g6v" event={"ID":"be803869-4625-418d-bd39-bdbb4e6e0bfd","Type":"ContainerStarted","Data":"8659190e8633f7b88664c6c7e44927faf89d76ab66a53b4530e433a52d8c9664"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.589100 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=34.589080613 podStartE2EDuration="34.589080613s" podCreationTimestamp="2026-02-19 05:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:27.575713447 +0000 UTC m=+1103.609036016" watchObservedRunningTime="2026-02-19 05:43:27.589080613 +0000 UTC m=+1103.622403182"
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.597804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerStarted","Data":"1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.597865 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerStarted","Data":"0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.598022 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c45b5647f-k799c" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon-log" containerID="cri-o://0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246" gracePeriod=30
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.598162 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c45b5647f-k799c" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon" containerID="cri-o://1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4" gracePeriod=30
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.602540 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zf89d" podStartSLOduration=19.602523051 podStartE2EDuration="19.602523051s" podCreationTimestamp="2026-02-19 05:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:27.594852818 +0000 UTC m=+1103.628175387" watchObservedRunningTime="2026-02-19 05:43:27.602523051 +0000 UTC m=+1103.635845620"
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.604266 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.613371 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-px7xk" podStartSLOduration=16.613352323 podStartE2EDuration="16.613352323s" podCreationTimestamp="2026-02-19 05:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:27.608690106 +0000 UTC m=+1103.642012665" watchObservedRunningTime="2026-02-19 05:43:27.613352323 +0000 UTC m=+1103.646674892"
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.613415 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerStarted","Data":"55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e"}
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.613449 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerStarted","Data":"3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee"}
Feb 19 05:43:27 crc kubenswrapper[5012]: E0219 05:43:27.614037 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current\\\"\"" pod="openstack/cinder-db-sync-xj7dw" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc"
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.626540 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w9g6v" podStartSLOduration=3.6619611389999998 podStartE2EDuration="40.626522574s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="2026-02-19 05:42:48.338002479 +0000 UTC m=+1064.371325048" lastFinishedPulling="2026-02-19 05:43:25.302563904 +0000 UTC m=+1101.335886483" observedRunningTime="2026-02-19 05:43:27.623652942 +0000 UTC m=+1103.656975511" watchObservedRunningTime="2026-02-19 05:43:27.626522574 +0000 UTC m=+1103.659845143"
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.687092 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75cc7d9585-x8r8l" podStartSLOduration=2.6878909 podStartE2EDuration="36.687069806s" podCreationTimestamp="2026-02-19 05:42:51 +0000 UTC" firstStartedPulling="2026-02-19 05:42:52.742332928 +0000 UTC m=+1068.775655497" lastFinishedPulling="2026-02-19 05:43:26.741511824 +0000 UTC m=+1102.774834403" observedRunningTime="2026-02-19 05:43:27.651392039 +0000 UTC m=+1103.684714618" watchObservedRunningTime="2026-02-19 05:43:27.687069806 +0000 UTC m=+1103.720392375"
Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.697864 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c45b5647f-k799c" podStartSLOduration=4.294761211 podStartE2EDuration="40.697841687s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="2026-02-19 05:42:48.960386609 +0000 UTC m=+1064.993709168" lastFinishedPulling="2026-02-19 05:43:25.363467075 +0000 UTC m=+1101.396789644" observedRunningTime="2026-02-19 05:43:27.651392039 +0000 UTC m=+1103.684714618" watchObservedRunningTime="2026-02-19 05:43:27.697841687 +0000 UTC m=+1103.731164256"
Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.140527 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c45b5647f-k799c"
Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.622632 5012 generic.go:334] "Generic (PLEG): container finished" podID="88a90a35-c893-4857-9f8b-9a405c96c044" containerID="080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33" exitCode=143
Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.622685 5012 generic.go:334] "Generic (PLEG): container finished" podID="88a90a35-c893-4857-9f8b-9a405c96c044" containerID="ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7" exitCode=143
Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.622693 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerDied","Data":"080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33"}
Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.622765 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerDied","Data":"ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7"}
Feb 19 05:43:31 crc kubenswrapper[5012]: I0219 05:43:31.881749 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75cc7d9585-x8r8l"
Feb 19 05:43:31 crc kubenswrapper[5012]: I0219 05:43:31.882330 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75cc7d9585-x8r8l"
Feb 19 05:43:32 crc kubenswrapper[5012]: I0219 05:43:32.670348 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdcb467fb-8tvnz" event={"ID":"6c937bbe-f068-4e5b-81ad-9455104062da","Type":"ContainerStarted","Data":"2584a3c6001f260f3a8f60bdf3e0d6ec9921502c46539b1bf34925a5b4a37ead"}
Feb 19 05:43:32 crc kubenswrapper[5012]: I0219 05:43:32.680125 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerStarted","Data":"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"}
Feb 19 05:43:32 crc kubenswrapper[5012]: I0219 05:43:32.691246 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cdcb467fb-8tvnz" podStartSLOduration=37.691228878 podStartE2EDuration="37.691228878s" podCreationTimestamp="2026-02-19 05:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:32.688667294 +0000 UTC
m=+1108.721989873" watchObservedRunningTime="2026-02-19 05:43:32.691228878 +0000 UTC m=+1108.724551437" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.122505 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155284 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155366 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155398 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155439 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155471 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: 
\"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155546 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155613 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155640 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.156105 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.158242 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs" (OuterVolumeSpecName: "logs") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.160050 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.161340 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts" (OuterVolumeSpecName: "scripts") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.163027 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq" (OuterVolumeSpecName: "kube-api-access-fhpwq") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "kube-api-access-fhpwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.191630 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.211221 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.218018 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data" (OuterVolumeSpecName: "config-data") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258083 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258119 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258129 5012 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258140 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc 
kubenswrapper[5012]: I0219 05:43:33.258176 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258185 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258195 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258206 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.277792 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.360258 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.693115 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerDied","Data":"ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9"} Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.693211 5012 scope.go:117] "RemoveContainer" containerID="080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33" Feb 19 
05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.694286 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.696353 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerStarted","Data":"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"} Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.696400 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-log" containerID="cri-o://84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759" gracePeriod=30 Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.696463 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-httpd" containerID="cri-o://27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6" gracePeriod=30 Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.701104 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81"} Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.726389 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=39.72636314 podStartE2EDuration="39.72636314s" podCreationTimestamp="2026-02-19 05:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:33.720490333 +0000 UTC 
m=+1109.753812912" watchObservedRunningTime="2026-02-19 05:43:33.72636314 +0000 UTC m=+1109.759685709" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.738200 5012 scope.go:117] "RemoveContainer" containerID="ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.745357 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.755134 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.787675 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:43:33 crc kubenswrapper[5012]: E0219 05:43:33.788348 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-httpd" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788361 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-httpd" Feb 19 05:43:33 crc kubenswrapper[5012]: E0219 05:43:33.788374 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-log" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788379 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-log" Feb 19 05:43:33 crc kubenswrapper[5012]: E0219 05:43:33.788395 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="init" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788401 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="init" Feb 19 05:43:33 crc kubenswrapper[5012]: E0219 05:43:33.788416 5012 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788423 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788580 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-httpd" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788596 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-log" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788604 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.789432 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.792253 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.791991 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.864387 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.971957 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972100 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972130 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972291 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972341 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972358 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972393 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074247 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074319 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074341 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074355 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074369 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074458 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") 
pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074519 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074826 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.075120 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.075275 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " 
pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.085119 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.093284 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.093286 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.093614 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.096945 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: 
I0219 05:43:34.106750 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.409062 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.589381 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693036 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693172 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693389 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693427 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") pod 
\"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693459 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693476 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693510 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.708908 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.712220 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs" (OuterVolumeSpecName: "logs") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.724909 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.725939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8" (OuterVolumeSpecName: "kube-api-access-vglb8") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "kube-api-access-vglb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.737808 5012 generic.go:334] "Generic (PLEG): container finished" podID="be803869-4625-418d-bd39-bdbb4e6e0bfd" containerID="8659190e8633f7b88664c6c7e44927faf89d76ab66a53b4530e433a52d8c9664" exitCode=0 Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.747238 5012 generic.go:334] "Generic (PLEG): container finished" podID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6" exitCode=0 Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.747270 5012 generic.go:334] "Generic (PLEG): container finished" podID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759" exitCode=143 Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.747387 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.755822 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts" (OuterVolumeSpecName: "scripts") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.757877 5012 generic.go:334] "Generic (PLEG): container finished" podID="555a6373-5cdf-490e-b6ea-b0fb55425d28" containerID="9a5f9edac057b3de1965c26aac0927e9eaced35943e1b07d9b0176cc162f7fc5" exitCode=0 Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.762446 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" path="/var/lib/kubelet/pods/88a90a35-c893-4857-9f8b-9a405c96c044/volumes" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.771551 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data" (OuterVolumeSpecName: "config-data") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.772548 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.791438 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798004 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798037 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798051 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798062 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798072 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798114 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798125 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798136 5012 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.821742 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823227 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9g6v" event={"ID":"be803869-4625-418d-bd39-bdbb4e6e0bfd","Type":"ContainerDied","Data":"8659190e8633f7b88664c6c7e44927faf89d76ab66a53b4530e433a52d8c9664"} Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823277 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerDied","Data":"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"} Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823313 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerDied","Data":"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"} Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823328 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerDied","Data":"ed8c2a32d5ff07698cb91058f40ee14be5a75fe90e647b1bf825aa951923980d"} Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823340 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zf89d" event={"ID":"555a6373-5cdf-490e-b6ea-b0fb55425d28","Type":"ContainerDied","Data":"9a5f9edac057b3de1965c26aac0927e9eaced35943e1b07d9b0176cc162f7fc5"} Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 
05:43:34.823503 5012 scope.go:117] "RemoveContainer" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.856039 5012 scope.go:117] "RemoveContainer" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.889265 5012 scope.go:117] "RemoveContainer" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6" Feb 19 05:43:34 crc kubenswrapper[5012]: E0219 05:43:34.889850 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": container with ID starting with 27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6 not found: ID does not exist" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.889896 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"} err="failed to get container status \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": rpc error: code = NotFound desc = could not find container \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": container with ID starting with 27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6 not found: ID does not exist" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.889923 5012 scope.go:117] "RemoveContainer" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759" Feb 19 05:43:34 crc kubenswrapper[5012]: E0219 05:43:34.890128 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": container 
with ID starting with 84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759 not found: ID does not exist" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890149 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"} err="failed to get container status \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": rpc error: code = NotFound desc = could not find container \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": container with ID starting with 84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759 not found: ID does not exist" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890165 5012 scope.go:117] "RemoveContainer" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890389 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"} err="failed to get container status \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": rpc error: code = NotFound desc = could not find container \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": container with ID starting with 27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6 not found: ID does not exist" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890412 5012 scope.go:117] "RemoveContainer" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890576 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"} err="failed to get container 
status \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": rpc error: code = NotFound desc = could not find container \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": container with ID starting with 84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759 not found: ID does not exist" Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.906994 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.059158 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.088521 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.097809 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.113646 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:43:35 crc kubenswrapper[5012]: E0219 05:43:35.114087 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-log" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.114105 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-log" Feb 19 05:43:35 crc kubenswrapper[5012]: E0219 05:43:35.114141 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-httpd" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.114148 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" 
containerName="glance-httpd" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.114394 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-httpd" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.114418 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-log" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.115340 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.124839 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.125025 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.127956 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221775 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221820 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 
05:43:35.221889 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221910 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221988 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.222024 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.222052 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc 
kubenswrapper[5012]: I0219 05:43:35.222073 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325155 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325205 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325321 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325357 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325379 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325398 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325423 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.326186 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.326636 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.328187 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.331838 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.331919 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.334231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.335099 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.348239 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.386125 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.470361 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.799928 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerStarted","Data":"884f09cbda393c2ecb1a2ab4bc0243e004e662fb5c7beaf39c14a2c689ed4fc6"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.006869 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.116762 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9g6v" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.198956 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.256718 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257232 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257264 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257440 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257552 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257766 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs" (OuterVolumeSpecName: "logs") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.258222 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.268470 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9" (OuterVolumeSpecName: "kube-api-access-6rnk9") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "kube-api-access-6rnk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.268567 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts" (OuterVolumeSpecName: "scripts") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.295396 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.296455 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data" (OuterVolumeSpecName: "config-data") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.309843 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.311075 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.359680 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360103 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360136 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360219 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360248 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360291 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360701 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360716 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360727 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360738 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.373819 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw" (OuterVolumeSpecName: "kube-api-access-2nknw") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "kube-api-access-2nknw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.374748 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.374904 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts" (OuterVolumeSpecName: "scripts") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.376347 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: E0219 05:43:36.391047 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle podName:555a6373-5cdf-490e-b6ea-b0fb55425d28 nodeName:}" failed. No retries permitted until 2026-02-19 05:43:36.891017063 +0000 UTC m=+1112.924339632 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28") : error deleting /var/lib/kubelet/pods/555a6373-5cdf-490e-b6ea-b0fb55425d28/volume-subpaths: remove /var/lib/kubelet/pods/555a6373-5cdf-490e-b6ea-b0fb55425d28/volume-subpaths: no such file or directory Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.394293 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data" (OuterVolumeSpecName: "config-data") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462283 5012 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462363 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462374 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462384 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462394 5012 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.724590 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" path="/var/lib/kubelet/pods/eb53e400-f5d7-4c86-9aab-eda61301a4cf/volumes" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.826705 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9g6v" event={"ID":"be803869-4625-418d-bd39-bdbb4e6e0bfd","Type":"ContainerDied","Data":"1bf63392d872a713c6fdde27be345aac65be8a37d2e0427ef52052d66a795c4c"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.826791 5012 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bf63392d872a713c6fdde27be345aac65be8a37d2e0427ef52052d66a795c4c" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.826786 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9g6v" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.839225 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerStarted","Data":"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.839271 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerStarted","Data":"9f6241f52b36b9304734fa39b59f3e6db469ba06ede3efe69c7f2c281f65bc4e"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.845939 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zf89d" event={"ID":"555a6373-5cdf-490e-b6ea-b0fb55425d28","Type":"ContainerDied","Data":"6d075631935c3f811666c0cc2948951facdd256267e93e94fef708b6510322b7"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.845977 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d075631935c3f811666c0cc2948951facdd256267e93e94fef708b6510322b7" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.846048 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.862556 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerStarted","Data":"bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.862611 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerStarted","Data":"c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.899342 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.8993231059999998 podStartE2EDuration="3.899323106s" podCreationTimestamp="2026-02-19 05:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:36.884583156 +0000 UTC m=+1112.917905725" watchObservedRunningTime="2026-02-19 05:43:36.899323106 +0000 UTC m=+1112.932645675" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.947192 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7b574779c9-x2bsv"] Feb 19 05:43:36 crc kubenswrapper[5012]: E0219 05:43:36.949534 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555a6373-5cdf-490e-b6ea-b0fb55425d28" containerName="keystone-bootstrap" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.949559 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="555a6373-5cdf-490e-b6ea-b0fb55425d28" containerName="keystone-bootstrap" Feb 19 05:43:36 crc kubenswrapper[5012]: E0219 05:43:36.949597 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be803869-4625-418d-bd39-bdbb4e6e0bfd" containerName="placement-db-sync" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.949605 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="be803869-4625-418d-bd39-bdbb4e6e0bfd" containerName="placement-db-sync" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.949812 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="be803869-4625-418d-bd39-bdbb4e6e0bfd" containerName="placement-db-sync" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.949835 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="555a6373-5cdf-490e-b6ea-b0fb55425d28" containerName="keystone-bootstrap" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.950674 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.954656 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.954939 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.968932 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b574779c9-x2bsv"] Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.971641 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.999595 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.069444 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.071746 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.073974 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-fernet-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-public-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074085 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-internal-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074117 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-credential-keys\") 
pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074141 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpffc\" (UniqueName: \"kubernetes.io/projected/0e0a6a9f-d11f-4084-9742-7780b20fae75-kube-api-access-gpffc\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074167 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-combined-ca-bundle\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074182 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-config-data\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074214 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-scripts\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074328 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.078587 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.078792 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.078838 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dzmq8" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.078800 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.087827 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.099393 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.175910 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.175970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176063 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176120 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-fernet-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176146 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-public-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176171 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176224 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-internal-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176246 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176296 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-credential-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176346 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpffc\" (UniqueName: \"kubernetes.io/projected/0e0a6a9f-d11f-4084-9742-7780b20fae75-kube-api-access-gpffc\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176372 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-combined-ca-bundle\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176407 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-config-data\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176426 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") 
pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176452 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-scripts\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176498 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7kbf\" (UniqueName: \"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.180456 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-fernet-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.180544 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-scripts\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.180728 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-internal-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " 
pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.184653 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-public-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.187828 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-combined-ca-bundle\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.202157 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-config-data\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.204804 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-credential-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.211947 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpffc\" (UniqueName: \"kubernetes.io/projected/0e0a6a9f-d11f-4084-9742-7780b20fae75-kube-api-access-gpffc\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.252269 5012 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f94997dd8-cvnfv"] Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.253692 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.271195 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f94997dd8-cvnfv"] Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.272021 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278027 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278092 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278135 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278172 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7kbf\" (UniqueName: 
\"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278214 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278228 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278271 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.279430 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.288320 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: 
\"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.289003 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.290239 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.294041 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.294984 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.298531 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7kbf\" (UniqueName: \"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 
05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.380879 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-scripts\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383660 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-logs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383701 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-combined-ca-bundle\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383774 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkk4z\" (UniqueName: \"kubernetes.io/projected/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-kube-api-access-zkk4z\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383905 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-public-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 
19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383935 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-config-data\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383979 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-internal-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.403020 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486620 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-internal-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486771 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-scripts\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486810 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-logs\") pod 
\"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486834 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-combined-ca-bundle\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkk4z\" (UniqueName: \"kubernetes.io/projected/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-kube-api-access-zkk4z\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.487028 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-public-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.487064 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-config-data\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.488059 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-logs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " 
pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.532838 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-scripts\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.533103 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-internal-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.534924 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-config-data\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.535703 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-combined-ca-bundle\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.538325 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkk4z\" (UniqueName: \"kubernetes.io/projected/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-kube-api-access-zkk4z\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.539050 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-public-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.614088 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.149473 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:43:41 crc kubenswrapper[5012]: W0219 05:43:41.168201 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd214ce94_6c65_4641_a1e2_21f5f920ecec.slice/crio-020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b WatchSource:0}: Error finding container 020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b: Status 404 returned error can't find the container with id 020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.284643 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f94997dd8-cvnfv"] Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.306459 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b574779c9-x2bsv"] Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.930405 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f94997dd8-cvnfv" event={"ID":"b0ce1e0a-4e51-408c-b3f8-500cf6476b96","Type":"ContainerStarted","Data":"ed50fcc4b5644658131f17e470a2e14ab8224f5d90567660d7376f21a9bc839f"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.930752 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f94997dd8-cvnfv" 
event={"ID":"b0ce1e0a-4e51-408c-b3f8-500cf6476b96","Type":"ContainerStarted","Data":"15fb945b89b8b44897961cb004e8122151ce64b02699d11a5abc38fcf5252b14"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.932056 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerStarted","Data":"f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.932080 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerStarted","Data":"020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.941946 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerStarted","Data":"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.952263 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-cdj57" event={"ID":"89f14c4e-147e-4a05-a8d9-63b93aaad4a4","Type":"ContainerStarted","Data":"d0e335ec457cf8c772f55111337cf2d1aae49da15e75b237650c2e4a19efd926"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.974784 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.976237 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b574779c9-x2bsv" 
event={"ID":"0e0a6a9f-d11f-4084-9742-7780b20fae75","Type":"ContainerStarted","Data":"e8deb7af9035826b7190a2427036f8bc6012ecf3eda7fd34c5ffb11b5eb4f2b4"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.976263 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b574779c9-x2bsv" event={"ID":"0e0a6a9f-d11f-4084-9742-7780b20fae75","Type":"ContainerStarted","Data":"5ef30ae4559150a8057de9dfba533693fc362a504fa851ffbcebc2172b2e05c0"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.977220 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.979952 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jzclm" event={"ID":"a34a979c-9102-471f-9678-048fd5198cb8","Type":"ContainerStarted","Data":"8322bcc6cc3c5b2d8222ae8137e7a8ab0b73bac7b8fa9b87cd91c71100844e13"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.996515 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.996477795 podStartE2EDuration="6.996477795s" podCreationTimestamp="2026-02-19 05:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:41.960802259 +0000 UTC m=+1117.994124838" watchObservedRunningTime="2026-02-19 05:43:41.996477795 +0000 UTC m=+1118.029800364" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.012273 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-cdj57" podStartSLOduration=2.9744668819999998 podStartE2EDuration="51.012240352s" podCreationTimestamp="2026-02-19 05:42:51 +0000 UTC" firstStartedPulling="2026-02-19 05:42:52.988901024 +0000 UTC m=+1069.022223593" lastFinishedPulling="2026-02-19 05:43:41.026674494 +0000 UTC m=+1117.059997063" 
observedRunningTime="2026-02-19 05:43:41.987431698 +0000 UTC m=+1118.020754267" watchObservedRunningTime="2026-02-19 05:43:42.012240352 +0000 UTC m=+1118.045562921" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.079715 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jzclm" podStartSLOduration=3.03960023 podStartE2EDuration="55.079695587s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="2026-02-19 05:42:48.693347859 +0000 UTC m=+1064.726670428" lastFinishedPulling="2026-02-19 05:43:40.733443216 +0000 UTC m=+1116.766765785" observedRunningTime="2026-02-19 05:43:42.009244106 +0000 UTC m=+1118.042566675" watchObservedRunningTime="2026-02-19 05:43:42.079695587 +0000 UTC m=+1118.113018156" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.090371 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7b574779c9-x2bsv" podStartSLOduration=6.090355915 podStartE2EDuration="6.090355915s" podCreationTimestamp="2026-02-19 05:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:42.044627495 +0000 UTC m=+1118.077950064" watchObservedRunningTime="2026-02-19 05:43:42.090355915 +0000 UTC m=+1118.123678484" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.990538 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f94997dd8-cvnfv" event={"ID":"b0ce1e0a-4e51-408c-b3f8-500cf6476b96","Type":"ContainerStarted","Data":"1c337d580d168324833674cbb90b7f734c2fcb4515bb4135a581a165db4bb401"} Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.990774 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.990998 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.994472 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerStarted","Data":"1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff"} Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.994631 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.997525 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xj7dw" event={"ID":"b98c972c-b350-44a1-a7c5-028914fe7bfc","Type":"ContainerStarted","Data":"8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923"} Feb 19 05:43:43 crc kubenswrapper[5012]: I0219 05:43:43.026604 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f94997dd8-cvnfv" podStartSLOduration=6.026583382 podStartE2EDuration="6.026583382s" podCreationTimestamp="2026-02-19 05:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:43.01855846 +0000 UTC m=+1119.051881029" watchObservedRunningTime="2026-02-19 05:43:43.026583382 +0000 UTC m=+1119.059905951" Feb 19 05:43:43 crc kubenswrapper[5012]: I0219 05:43:43.049469 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c6b5c5b7b-9nnqj" podStartSLOduration=6.049452127 podStartE2EDuration="6.049452127s" podCreationTimestamp="2026-02-19 05:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:43.045971059 +0000 UTC m=+1119.079293628" watchObservedRunningTime="2026-02-19 05:43:43.049452127 +0000 UTC m=+1119.082774696" Feb 19 05:43:43 crc 
kubenswrapper[5012]: I0219 05:43:43.073695 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xj7dw" podStartSLOduration=4.143648422 podStartE2EDuration="52.073677265s" podCreationTimestamp="2026-02-19 05:42:51 +0000 UTC" firstStartedPulling="2026-02-19 05:42:53.089380579 +0000 UTC m=+1069.122703138" lastFinishedPulling="2026-02-19 05:43:41.019409412 +0000 UTC m=+1117.052731981" observedRunningTime="2026-02-19 05:43:43.070569787 +0000 UTC m=+1119.103892346" watchObservedRunningTime="2026-02-19 05:43:43.073677265 +0000 UTC m=+1119.106999824" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.007369 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.409572 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.409940 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430276 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430343 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430382 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430919 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430980 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915" gracePeriod=600 Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.456060 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.469506 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.565446 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.026580 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915" exitCode=0 Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.026796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915"} Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.028031 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac"} Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.028052 5012 scope.go:117] "RemoveContainer" containerID="f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.028870 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.028913 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.471980 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.472411 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.510784 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.521443 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:46 crc kubenswrapper[5012]: I0219 05:43:46.048018 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:46 crc kubenswrapper[5012]: I0219 05:43:46.048071 
5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:46 crc kubenswrapper[5012]: I0219 05:43:46.370844 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:43:47 crc kubenswrapper[5012]: I0219 05:43:47.057957 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:43:47 crc kubenswrapper[5012]: I0219 05:43:47.058465 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:43:47 crc kubenswrapper[5012]: I0219 05:43:47.187976 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 05:43:47 crc kubenswrapper[5012]: I0219 05:43:47.188540 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 05:43:48 crc kubenswrapper[5012]: I0219 05:43:48.152754 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:48 crc kubenswrapper[5012]: I0219 05:43:48.153108 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:43:48 crc kubenswrapper[5012]: I0219 05:43:48.154651 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:48 crc kubenswrapper[5012]: I0219 05:43:48.162387 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:43:49 crc kubenswrapper[5012]: I0219 05:43:49.080153 5012 generic.go:334] "Generic (PLEG): container finished" podID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" containerID="d0e335ec457cf8c772f55111337cf2d1aae49da15e75b237650c2e4a19efd926" exitCode=0 Feb 19 05:43:49 crc kubenswrapper[5012]: I0219 05:43:49.080212 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/watcher-db-sync-cdj57" event={"ID":"89f14c4e-147e-4a05-a8d9-63b93aaad4a4","Type":"ContainerDied","Data":"d0e335ec457cf8c772f55111337cf2d1aae49da15e75b237650c2e4a19efd926"} Feb 19 05:43:49 crc kubenswrapper[5012]: I0219 05:43:49.928901 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.003748 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.006280 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon-log" containerID="cri-o://3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee" gracePeriod=30 Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.006594 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" containerID="cri-o://55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e" gracePeriod=30 Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.089754 5012 generic.go:334] "Generic (PLEG): container finished" podID="a34a979c-9102-471f-9678-048fd5198cb8" containerID="8322bcc6cc3c5b2d8222ae8137e7a8ab0b73bac7b8fa9b87cd91c71100844e13" exitCode=0 Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.089846 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jzclm" event={"ID":"a34a979c-9102-471f-9678-048fd5198cb8","Type":"ContainerDied","Data":"8322bcc6cc3c5b2d8222ae8137e7a8ab0b73bac7b8fa9b87cd91c71100844e13"} Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.755431 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-cdj57" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.792630 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") pod \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.792741 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") pod \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.792819 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") pod \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.792877 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") pod \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.801093 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "89f14c4e-147e-4a05-a8d9-63b93aaad4a4" (UID: "89f14c4e-147e-4a05-a8d9-63b93aaad4a4"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.825511 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89f14c4e-147e-4a05-a8d9-63b93aaad4a4" (UID: "89f14c4e-147e-4a05-a8d9-63b93aaad4a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.833466 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n" (OuterVolumeSpecName: "kube-api-access-fq87n") pod "89f14c4e-147e-4a05-a8d9-63b93aaad4a4" (UID: "89f14c4e-147e-4a05-a8d9-63b93aaad4a4"). InnerVolumeSpecName "kube-api-access-fq87n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.872852 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data" (OuterVolumeSpecName: "config-data") pod "89f14c4e-147e-4a05-a8d9-63b93aaad4a4" (UID: "89f14c4e-147e-4a05-a8d9-63b93aaad4a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.898665 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.899420 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.899554 5012 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.899649 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.108687 5012 generic.go:334] "Generic (PLEG): container finished" podID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerID="55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e" exitCode=0 Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.108768 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerDied","Data":"55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e"} Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.113098 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-cdj57" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.113133 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-cdj57" event={"ID":"89f14c4e-147e-4a05-a8d9-63b93aaad4a4","Type":"ContainerDied","Data":"416ab3f77df9ec5ea4c8eb669473dd003dd38b711f8cc41ad525b40979b07e19"} Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.113152 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="416ab3f77df9ec5ea4c8eb669473dd003dd38b711f8cc41ad525b40979b07e19" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.425897 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: E0219 05:43:51.426765 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" containerName="watcher-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.426782 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" containerName="watcher-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.428863 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" containerName="watcher-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.434011 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.442934 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-6chdl" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.445923 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.446093 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.447444 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.450659 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.460245 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jzclm" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.482097 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: E0219 05:43:51.482528 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34a979c-9102-471f-9678-048fd5198cb8" containerName="barbican-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.482544 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34a979c-9102-471f-9678-048fd5198cb8" containerName="barbican-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.482738 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34a979c-9102-471f-9678-048fd5198cb8" containerName="barbican-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.483338 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.492431 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.497154 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.502557 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.512985 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z95mm\" (UniqueName: \"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") pod \"a34a979c-9102-471f-9678-048fd5198cb8\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513189 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") pod \"a34a979c-9102-471f-9678-048fd5198cb8\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513232 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") pod \"a34a979c-9102-471f-9678-048fd5198cb8\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513456 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513487 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-config-data\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513509 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513536 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513561 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513625 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513659 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94vz\" (UniqueName: \"kubernetes.io/projected/d4778529-f7d0-482b-bd67-003aaa7ca0ae-kube-api-access-k94vz\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513686 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513705 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513741 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513768 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513792 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513816 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4778529-f7d0-482b-bd67-003aaa7ca0ae-logs\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513851 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.518829 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.523198 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a34a979c-9102-471f-9678-048fd5198cb8" (UID: "a34a979c-9102-471f-9678-048fd5198cb8"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.529561 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm" (OuterVolumeSpecName: "kube-api-access-z95mm") pod "a34a979c-9102-471f-9678-048fd5198cb8" (UID: "a34a979c-9102-471f-9678-048fd5198cb8"). InnerVolumeSpecName "kube-api-access-z95mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.554420 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a34a979c-9102-471f-9678-048fd5198cb8" (UID: "a34a979c-9102-471f-9678-048fd5198cb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615252 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615337 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615358 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-config-data\") pod \"watcher-applier-0\" (UID: 
\"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615380 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615406 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615423 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615486 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615514 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94vz\" (UniqueName: \"kubernetes.io/projected/d4778529-f7d0-482b-bd67-003aaa7ca0ae-kube-api-access-k94vz\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc 
kubenswrapper[5012]: I0219 05:43:51.615540 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615559 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615590 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615610 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615631 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615651 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d4778529-f7d0-482b-bd67-003aaa7ca0ae-logs\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615706 5012 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615718 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615729 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z95mm\" (UniqueName: \"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.616171 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4778529-f7d0-482b-bd67-003aaa7ca0ae-logs\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.616490 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.621262 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: 
\"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.621631 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.627756 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.629415 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.631940 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-config-data\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.632708 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.634104 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.637910 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.640840 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.663883 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94vz\" (UniqueName: \"kubernetes.io/projected/d4778529-f7d0-482b-bd67-003aaa7ca0ae-kube-api-access-k94vz\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.663997 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.664996 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.785876 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.807445 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.821240 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.882678 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.157:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8080: connect: connection refused" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.126720 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jzclm" event={"ID":"a34a979c-9102-471f-9678-048fd5198cb8","Type":"ContainerDied","Data":"15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969"} Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.127110 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.127001 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jzclm" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.135177 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413"} Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136022 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-central-agent" containerID="cri-o://e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d" gracePeriod=30 Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136189 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136217 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="sg-core" containerID="cri-o://bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c" gracePeriod=30 Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136154 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="proxy-httpd" containerID="cri-o://8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413" gracePeriod=30 Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136229 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-notification-agent" containerID="cri-o://5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81" gracePeriod=30 Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.369683 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.599200762 podStartE2EDuration="1m5.36966736s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="2026-02-19 05:42:48.786483529 +0000 UTC m=+1064.819806098" lastFinishedPulling="2026-02-19 05:43:51.556950127 +0000 UTC m=+1127.590272696" observedRunningTime="2026-02-19 05:43:52.172913915 +0000 UTC m=+1128.206236484" watchObservedRunningTime="2026-02-19 05:43:52.36966736 +0000 UTC m=+1128.402989929" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.374696 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-779bfc8b79-ffj7v"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.376358 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.382224 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.382427 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hg9kp" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.382651 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.391795 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.407225 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-779bfc8b79-ffj7v"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437292 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data\") pod 
\"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437353 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-combined-ca-bundle\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437470 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data-custom\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437504 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9133f0f1-2d9e-462e-ba56-8a206f61bd03-logs\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437547 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsmj9\" (UniqueName: \"kubernetes.io/projected/9133f0f1-2d9e-462e-ba56-8a206f61bd03-kube-api-access-bsmj9\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.453417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5bb75756b-hd4xs"] Feb 19 05:43:52 crc 
kubenswrapper[5012]: I0219 05:43:52.454990 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.460673 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 05:43:52 crc kubenswrapper[5012]: W0219 05:43:52.505343 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4778529_f7d0_482b_bd67_003aaa7ca0ae.slice/crio-8b9702811b20b1bd747f944495d8fb979a7f48a2280de5fb9506d28c3b15880e WatchSource:0}: Error finding container 8b9702811b20b1bd747f944495d8fb979a7f48a2280de5fb9506d28c3b15880e: Status 404 returned error can't find the container with id 8b9702811b20b1bd747f944495d8fb979a7f48a2280de5fb9506d28c3b15880e Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.519121 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.521254 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.536728 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bb75756b-hd4xs"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.542592 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.542638 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.542677 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data-custom\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543176 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data-custom\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543244 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543272 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9133f0f1-2d9e-462e-ba56-8a206f61bd03-logs\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543380 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543422 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsmj9\" (UniqueName: \"kubernetes.io/projected/9133f0f1-2d9e-462e-ba56-8a206f61bd03-kube-api-access-bsmj9\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543469 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-546k4\" (UniqueName: \"kubernetes.io/projected/ee216ad2-2baf-4bba-a3fe-81acf9218af0-kube-api-access-546k4\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc 
kubenswrapper[5012]: I0219 05:43:52.543495 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543512 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgbb\" (UniqueName: \"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-combined-ca-bundle\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543644 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543711 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee216ad2-2baf-4bba-a3fe-81acf9218af0-logs\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " 
pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543733 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543783 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.545007 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9133f0f1-2d9e-462e-ba56-8a206f61bd03-logs\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.551461 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.553955 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data-custom\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.561249 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 
19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.563169 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-combined-ca-bundle\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.564110 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.580202 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsmj9\" (UniqueName: \"kubernetes.io/projected/9133f0f1-2d9e-462e-ba56-8a206f61bd03-kube-api-access-bsmj9\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.587710 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.647923 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648009 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648097 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data-custom\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648139 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648202 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648249 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-546k4\" (UniqueName: \"kubernetes.io/projected/ee216ad2-2baf-4bba-a3fe-81acf9218af0-kube-api-access-546k4\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648272 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhgbb\" (UniqueName: 
\"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648355 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648408 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee216ad2-2baf-4bba-a3fe-81acf9218af0-logs\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648432 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648676 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.655471 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.657819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.697528 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data-custom\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.699335 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.700294 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.700803 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.700914 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee216ad2-2baf-4bba-a3fe-81acf9218af0-logs\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.701255 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.701778 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.712587 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-546k4\" (UniqueName: \"kubernetes.io/projected/ee216ad2-2baf-4bba-a3fe-81acf9218af0-kube-api-access-546k4\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.712975 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.716745 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhgbb\" (UniqueName: \"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.725648 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.727885 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.732773 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.753624 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.753741 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.754042 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.754160 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.754258 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.772366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.803852 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862466 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862575 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862652 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862751 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " 
pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.863534 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.874485 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.875701 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.882012 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.887096 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: 
I0219 05:43:52.888428 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.941049 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.167458 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"27cdd4f4a5ee55d08e9db9c6e3380ff5674b5137557956c3e1a7be05a457c3b6"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.172525 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerStarted","Data":"65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.172584 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerStarted","Data":"1f8ff58170fed0be8d7680ffb942663aaa5ec3f1c388578dbd28c9e5432c8ac1"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.175429 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d4778529-f7d0-482b-bd67-003aaa7ca0ae","Type":"ContainerStarted","Data":"8b9702811b20b1bd747f944495d8fb979a7f48a2280de5fb9506d28c3b15880e"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188904 5012 generic.go:334] "Generic (PLEG): container finished" podID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerID="8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413" exitCode=0 Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188927 5012 generic.go:334] "Generic (PLEG): container finished" podID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" 
containerID="bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c" exitCode=2 Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188936 5012 generic.go:334] "Generic (PLEG): container finished" podID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerID="e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d" exitCode=0 Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188955 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188976 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188987 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.295226 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-779bfc8b79-ffj7v"] Feb 19 05:43:53 crc kubenswrapper[5012]: W0219 05:43:53.304815 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9133f0f1_2d9e_462e_ba56_8a206f61bd03.slice/crio-abf890b7e383d28b08223c36e7492c70ca45beb20890d51bf20f4f69a23f948d WatchSource:0}: Error finding container abf890b7e383d28b08223c36e7492c70ca45beb20890d51bf20f4f69a23f948d: Status 404 returned error can't find the container with id abf890b7e383d28b08223c36e7492c70ca45beb20890d51bf20f4f69a23f948d Feb 19 05:43:53 crc kubenswrapper[5012]: W0219 
05:43:53.328143 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee216ad2_2baf_4bba_a3fe_81acf9218af0.slice/crio-704d494d5a0e851513f788eaa7222c23d808d93c67dca6c0698a6c35c566b0f6 WatchSource:0}: Error finding container 704d494d5a0e851513f788eaa7222c23d808d93c67dca6c0698a6c35c566b0f6: Status 404 returned error can't find the container with id 704d494d5a0e851513f788eaa7222c23d808d93c67dca6c0698a6c35c566b0f6 Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.329425 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bb75756b-hd4xs"] Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.336173 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.420808 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:43:53 crc kubenswrapper[5012]: W0219 05:43:53.428608 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4384807_a690_4e84_8b2f_d1f82a6e801b.slice/crio-6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1 WatchSource:0}: Error finding container 6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1: Status 404 returned error can't find the container with id 6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1 Feb 19 05:43:53 crc kubenswrapper[5012]: E0219 05:43:53.699681 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb98c972c_b350_44a1_a7c5_028914fe7bfc.slice/crio-conmon-8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb98c972c_b350_44a1_a7c5_028914fe7bfc.slice/crio-8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923.scope\": RecentStats: unable to find data in memory cache]" Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.204344 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" event={"ID":"ee216ad2-2baf-4bba-a3fe-81acf9218af0","Type":"ContainerStarted","Data":"704d494d5a0e851513f788eaa7222c23d808d93c67dca6c0698a6c35c566b0f6"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.205993 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerStarted","Data":"4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.206256 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.207286 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerStarted","Data":"fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.207341 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerStarted","Data":"831c1b2e39b299e04f560adb31739eb0da9f5a5165d710984ac8d2ab457658e9"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.208723 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779bfc8b79-ffj7v" event={"ID":"9133f0f1-2d9e-462e-ba56-8a206f61bd03","Type":"ContainerStarted","Data":"abf890b7e383d28b08223c36e7492c70ca45beb20890d51bf20f4f69a23f948d"} Feb 19 05:43:54 
crc kubenswrapper[5012]: I0219 05:43:54.213996 5012 generic.go:334] "Generic (PLEG): container finished" podID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerID="5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81" exitCode=0
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.214078 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81"}
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.217982 5012 generic.go:334] "Generic (PLEG): container finished" podID="b98c972c-b350-44a1-a7c5-028914fe7bfc" containerID="8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923" exitCode=0
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.218091 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xj7dw" event={"ID":"b98c972c-b350-44a1-a7c5-028914fe7bfc","Type":"ContainerDied","Data":"8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923"}
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.219767 5012 generic.go:334] "Generic (PLEG): container finished" podID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerID="d5440c73e1cd6d63cff9dfa2d45367d2d8fced5e8574ebcc35f43099ef7046cb" exitCode=0
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.219801 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerDied","Data":"d5440c73e1cd6d63cff9dfa2d45367d2d8fced5e8574ebcc35f43099ef7046cb"}
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.219825 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerStarted","Data":"6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1"}
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.244358 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.24433289 podStartE2EDuration="3.24433289s" podCreationTimestamp="2026-02-19 05:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:54.224930242 +0000 UTC m=+1130.258252811" watchObservedRunningTime="2026-02-19 05:43:54.24433289 +0000 UTC m=+1130.277655459"
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.925237 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f669f7d76-2qg4s"]
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.926924 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.928745 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.932946 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.948534 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f669f7d76-2qg4s"]
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.036721 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data-custom\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.037411 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-internal-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.037461 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrz6n\" (UniqueName: \"kubernetes.io/projected/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-kube-api-access-xrz6n\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.037507 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-combined-ca-bundle\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.037800 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-logs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.038182 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.038356 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-public-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.146186 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-combined-ca-bundle\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.146774 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-logs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147013 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147135 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-public-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147684 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data-custom\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147837 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-internal-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147905 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrz6n\" (UniqueName: \"kubernetes.io/projected/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-kube-api-access-xrz6n\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.148754 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-logs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.154376 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data-custom\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.154413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-public-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.154568 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-internal-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.161790 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-combined-ca-bundle\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.167455 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.174874 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrz6n\" (UniqueName: \"kubernetes.io/projected/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-kube-api-access-xrz6n\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.244538 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f669f7d76-2qg4s"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.778546 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xj7dw"
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879073 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") "
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879555 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") "
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879575 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") "
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879607 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") "
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") "
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879730 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") "
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.880021 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.882286 5012 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.889898 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.891972 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp" (OuterVolumeSpecName: "kube-api-access-sghmp") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "kube-api-access-sghmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.892838 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts" (OuterVolumeSpecName: "scripts") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.931066 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.964818 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data" (OuterVolumeSpecName: "config-data") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985290 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985341 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985351 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985362 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985384 5012 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.280699 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerStarted","Data":"0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.282784 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-778557f86b-hp4xf"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.282821 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-778557f86b-hp4xf"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.290781 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779bfc8b79-ffj7v" event={"ID":"9133f0f1-2d9e-462e-ba56-8a206f61bd03","Type":"ContainerStarted","Data":"c43bb3f9e5482ef4ca13a42f07ab087f195a8d34284dcae978d735032438fc88"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.293725 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.293761 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.308959 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xj7dw" event={"ID":"b98c972c-b350-44a1-a7c5-028914fe7bfc","Type":"ContainerDied","Data":"a84681fa37d45c4925f780e8954023bd4c066ed1cbb2bb7d3fe3e2f3209e4c8b"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.309021 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84681fa37d45c4925f780e8954023bd4c066ed1cbb2bb7d3fe3e2f3209e4c8b"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.309093 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xj7dw"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.322923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerStarted","Data":"21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.323711 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6788477597-b25r4"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.343281 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-778557f86b-hp4xf" podStartSLOduration=4.343255725 podStartE2EDuration="4.343255725s" podCreationTimestamp="2026-02-19 05:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:56.314981544 +0000 UTC m=+1132.348304113" watchObservedRunningTime="2026-02-19 05:43:56.343255725 +0000 UTC m=+1132.376578294"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.394415 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6788477597-b25r4" podStartSLOduration=4.39439228 podStartE2EDuration="4.39439228s" podCreationTimestamp="2026-02-19 05:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:56.367260768 +0000 UTC m=+1132.400583337" watchObservedRunningTime="2026-02-19 05:43:56.39439228 +0000 UTC m=+1132.427714849"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.420264 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f669f7d76-2qg4s"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.445711 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.498992 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499109 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499383 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499457 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499492 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499602 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499665 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.511802 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd" (OuterVolumeSpecName: "kube-api-access-6qkmd") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "kube-api-access-6qkmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.512243 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts" (OuterVolumeSpecName: "scripts") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.512710 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.513052 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.605588 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.605620 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.605629 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.605640 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.609488 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.707520 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.787647 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788544 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-notification-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788564 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-notification-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788588 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" containerName="cinder-db-sync"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788596 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" containerName="cinder-db-sync"
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788624 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="proxy-httpd"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788648 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="proxy-httpd"
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788661 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="sg-core"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788667 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="sg-core"
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788694 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-central-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788703 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-central-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789120 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="sg-core"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789149 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" containerName="cinder-db-sync"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789180 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-central-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789200 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="proxy-httpd"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789229 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-notification-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.805592 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.805969 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.806135 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.807750 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.811555 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.811755 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-c2ldt"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.811826 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.819746 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.863446 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.887566 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.889146 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.917365 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920588 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920624 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920650 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920677 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920728 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022402 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022461 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022518 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022547 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022586 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022614 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022655 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") "
pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022712 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022727 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022745 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.023690 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.042914 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc 
kubenswrapper[5012]: I0219 05:43:57.048534 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.051270 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.055630 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.059836 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.094592 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.096804 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.099534 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124272 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124392 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124421 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124461 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124506 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.125617 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.126130 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.127608 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.128231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") pod 
\"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.130093 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.133021 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.135138 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.148459 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.148901 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.223941 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data" (OuterVolumeSpecName: "config-data") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.225804 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226094 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226143 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226164 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226190 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226227 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226244 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226265 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226397 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226411 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.332475 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334468 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334593 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334687 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334758 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334824 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334890 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.336409 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.336838 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.338509 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.345515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.347229 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " 
pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.352264 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.354727 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.366779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.380621 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.404474 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" event={"ID":"ee216ad2-2baf-4bba-a3fe-81acf9218af0","Type":"ContainerStarted","Data":"e232d1d5952ee862af66af4dfaf70596d6f99efe0dff23de65977b02a1393257"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.413516 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.097796548 podStartE2EDuration="6.41350214s" podCreationTimestamp="2026-02-19 05:43:51 +0000 UTC" firstStartedPulling="2026-02-19 05:43:52.535976049 +0000 UTC m=+1128.569298618" lastFinishedPulling="2026-02-19 05:43:55.851681641 +0000 UTC 
m=+1131.885004210" observedRunningTime="2026-02-19 05:43:57.40513688 +0000 UTC m=+1133.438459449" watchObservedRunningTime="2026-02-19 05:43:57.41350214 +0000 UTC m=+1133.446824709" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.438173 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779bfc8b79-ffj7v" event={"ID":"9133f0f1-2d9e-462e-ba56-8a206f61bd03","Type":"ContainerStarted","Data":"86550ff652eb777d0ef3deb2390b7cf98e95a512390779f2190d07e8bf35ef59"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.465772 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-779bfc8b79-ffj7v" podStartSLOduration=2.9357263749999998 podStartE2EDuration="5.465751603s" podCreationTimestamp="2026-02-19 05:43:52 +0000 UTC" firstStartedPulling="2026-02-19 05:43:53.321669624 +0000 UTC m=+1129.354992193" lastFinishedPulling="2026-02-19 05:43:55.851694852 +0000 UTC m=+1131.885017421" observedRunningTime="2026-02-19 05:43:57.463337852 +0000 UTC m=+1133.496660421" watchObservedRunningTime="2026-02-19 05:43:57.465751603 +0000 UTC m=+1133.499074172" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.474912 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d4778529-f7d0-482b-bd67-003aaa7ca0ae","Type":"ContainerStarted","Data":"047631b0cd4cbda2df14045f1f332c69a1e0680f36346341dbc4eecc5870407f"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.529890 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.535562 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.537789 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f669f7d76-2qg4s" event={"ID":"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c","Type":"ContainerStarted","Data":"873e505cdb32ddd1a1e8218374bdf9f511db9dbe463d5a652ec922bdd0b36c47"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.537829 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f669f7d76-2qg4s" event={"ID":"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c","Type":"ContainerStarted","Data":"2dcecd941288a9e0ccb5ed44503be98025613ca3b1582d3509bf0a5378ca32f5"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.628922 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.287505265 podStartE2EDuration="6.628902073s" podCreationTimestamp="2026-02-19 05:43:51 +0000 UTC" firstStartedPulling="2026-02-19 05:43:52.510328555 +0000 UTC m=+1128.543651134" lastFinishedPulling="2026-02-19 05:43:55.851725373 +0000 UTC m=+1131.885047942" observedRunningTime="2026-02-19 05:43:57.552943734 +0000 UTC m=+1133.586266303" watchObservedRunningTime="2026-02-19 05:43:57.628902073 +0000 UTC m=+1133.662224642" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.672135 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.729113 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.750700 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.753715 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.761398 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.761647 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.762381 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.767130 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: W0219 05:43:57.800744 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9c1c12b_f055_417b_9300_706f98b0f8cc.slice/crio-700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1 WatchSource:0}: Error finding container 700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1: Status 404 returned error can't find the container with id 700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1 Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.853881 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854338 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52m69\" (UniqueName: \"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 
crc kubenswrapper[5012]: I0219 05:43:57.854367 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854433 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854458 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854688 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854716 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.887035 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"] 
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956433 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956520 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52m69\" (UniqueName: \"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956550 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956579 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956602 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956658 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956674 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.960480 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.960829 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.961059 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.973058 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.973487 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.980919 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.000095 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52m69\" (UniqueName: \"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.129678 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.193254 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.562018 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" event={"ID":"ee216ad2-2baf-4bba-a3fe-81acf9218af0","Type":"ContainerStarted","Data":"a698fa179e07b7f282602fdb0616fddbf0515fd3c161369bc45c4a476a8b36fa"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.570400 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerStarted","Data":"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.570435 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerStarted","Data":"ead40496902b159e9bebd9ba1a479551b8997a76aa96d1285d684eafe66d05a5"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.575090 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerStarted","Data":"700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.588121 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" podStartSLOduration=4.065359082 podStartE2EDuration="6.588101907s" podCreationTimestamp="2026-02-19 05:43:52 +0000 UTC" firstStartedPulling="2026-02-19 05:43:53.331896781 +0000 UTC m=+1129.365219350" lastFinishedPulling="2026-02-19 05:43:55.854639606 +0000 UTC m=+1131.887962175" observedRunningTime="2026-02-19 05:43:58.580947538 +0000 
UTC m=+1134.614270107" watchObservedRunningTime="2026-02-19 05:43:58.588101907 +0000 UTC m=+1134.621424476" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.590208 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5eb71f6-31df-418a-98dd-11668ff38825" containerID="1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4" exitCode=137 Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.590275 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5eb71f6-31df-418a-98dd-11668ff38825" containerID="0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246" exitCode=137 Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.590417 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerDied","Data":"1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.590455 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerDied","Data":"0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.593599 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f669f7d76-2qg4s" event={"ID":"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c","Type":"ContainerStarted","Data":"f7fd69acd3ad1cf95ced27b912d252abccdac551c1477e1dec5ef9901f79fef6"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.593879 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6788477597-b25r4" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="dnsmasq-dns" containerID="cri-o://21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb" gracePeriod=10 Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.594320 5012 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.595127 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.735417 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" path="/var/lib/kubelet/pods/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b/volumes" Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.358705 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f669f7d76-2qg4s" podStartSLOduration=5.358683401 podStartE2EDuration="5.358683401s" podCreationTimestamp="2026-02-19 05:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:58.635808086 +0000 UTC m=+1134.669130655" watchObservedRunningTime="2026-02-19 05:43:59.358683401 +0000 UTC m=+1135.392005970" Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.363703 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.601444 5012 generic.go:334] "Generic (PLEG): container finished" podID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerID="21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb" exitCode=0 Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.601514 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerDied","Data":"21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb"} Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.603057 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" 
containerID="b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8" exitCode=0 Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.603118 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerDied","Data":"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8"} Feb 19 05:43:59 crc kubenswrapper[5012]: W0219 05:43:59.761860 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b154229_6752_44d3_8b53_96147254af19.slice/crio-653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7 WatchSource:0}: Error finding container 653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7: Status 404 returned error can't find the container with id 653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7 Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.971242 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998725 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998763 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998789 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998895 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998943 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.999923 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs" (OuterVolumeSpecName: "logs") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.004571 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g" (OuterVolumeSpecName: "kube-api-access-sq85g") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "kube-api-access-sq85g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.006995 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.034090 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts" (OuterVolumeSpecName: "scripts") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.041490 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data" (OuterVolumeSpecName: "config-data") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101221 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101583 5012 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101595 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101608 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101618 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.339749 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.411118 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhgbb\" (UniqueName: \"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.411222 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.411314 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.412475 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.412517 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.412578 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.416889 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb" (OuterVolumeSpecName: "kube-api-access-nhgbb") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "kube-api-access-nhgbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.465741 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.480029 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.487757 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.500586 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.508763 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config" (OuterVolumeSpecName: "config") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.509892 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.517934 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.517970 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhgbb\" (UniqueName: \"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.517985 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.517998 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.518009 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.518020 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.571979 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.631991 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"0b4212ecca9b60999638c1e6662994f4b7843d12f33587c1778eba71df434b72"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.636542 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerDied","Data":"86338dd7d36f9586a8f23b3288040adf41c4f986fc6d17aadaff0853e2749dd7"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.636579 5012 scope.go:117] "RemoveContainer" containerID="1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.636628 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.641637 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerDied","Data":"6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.641692 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.643707 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerStarted","Data":"653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.646353 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerStarted","Data":"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.647053 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.669540 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" podStartSLOduration=4.669523342 podStartE2EDuration="4.669523342s" podCreationTimestamp="2026-02-19 05:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:00.669392889 +0000 UTC m=+1136.702715468" watchObservedRunningTime="2026-02-19 05:44:00.669523342 +0000 UTC m=+1136.702845911" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.740427 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.750525 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.770959 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.781331 5012 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.857095 5012 scope.go:117] "RemoveContainer" containerID="0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.932211 5012 scope.go:117] "RemoveContainer" containerID="21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.953883 5012 scope.go:117] "RemoveContainer" containerID="d5440c73e1cd6d63cff9dfa2d45367d2d8fced5e8574ebcc35f43099ef7046cb" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.667186 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerStarted","Data":"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a"} Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.668436 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerID="ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3" exitCode=1 Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.668692 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3"} Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.669409 5012 scope.go:117] "RemoveContainer" containerID="ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.674629 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad"} Feb 19 05:44:01 crc kubenswrapper[5012]: 
I0219 05:44:01.679923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerStarted","Data":"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b"} Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.787433 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.793740 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.808261 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.808318 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.821379 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.821989 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.871436 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.882226 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.157:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8080: connect: connection refused" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.130774 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.696157 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerStarted","Data":"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.696701 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api-log" containerID="cri-o://9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" gracePeriod=30 Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.697054 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.697438 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api" containerID="cri-o://4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" gracePeriod=30 Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.720576 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.719249221 podStartE2EDuration="5.719249221s" podCreationTimestamp="2026-02-19 05:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:02.712915172 +0000 UTC m=+1138.746237781" watchObservedRunningTime="2026-02-19 05:44:02.719249221 +0000 UTC m=+1138.752571830" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.746077 5012 generic.go:334] "Generic (PLEG): container finished" podID="787f8a71-dee4-40d2-b33b-85bcfc58f921" containerID="c3b30cfc4d7788c5bf2800aec00271d7a398ee5903276843825107c74fa7f5b9" 
exitCode=0 Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.755672 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" path="/var/lib/kubelet/pods/d4384807-a690-4e84-8b2f-d1f82a6e801b/volumes" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.757450 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" path="/var/lib/kubelet/pods/d5eb71f6-31df-418a-98dd-11668ff38825/volumes" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759066 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759112 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759166 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-px7xk" event={"ID":"787f8a71-dee4-40d2-b33b-85bcfc58f921","Type":"ContainerDied","Data":"c3b30cfc4d7788c5bf2800aec00271d7a398ee5903276843825107c74fa7f5b9"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759184 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerStarted","Data":"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.779921 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.780498 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.635617111 podStartE2EDuration="6.78048026s" podCreationTimestamp="2026-02-19 05:43:56 +0000 UTC" firstStartedPulling="2026-02-19 05:43:57.808039655 +0000 UTC m=+1133.841362224" lastFinishedPulling="2026-02-19 05:43:59.952902804 +0000 UTC m=+1135.986225373" observedRunningTime="2026-02-19 05:44:02.780135641 +0000 UTC m=+1138.813458220" watchObservedRunningTime="2026-02-19 05:44:02.78048026 +0000 UTC m=+1138.813802829" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.808010 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.683617 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.779626 5012 generic.go:334] "Generic (PLEG): container finished" podID="6b154229-6752-44d3-8b53-96147254af19" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" exitCode=0 Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780170 5012 generic.go:334] "Generic (PLEG): container finished" podID="6b154229-6752-44d3-8b53-96147254af19" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" exitCode=143 Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780232 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerDied","Data":"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d"} Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780275 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerDied","Data":"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a"} Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780288 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerDied","Data":"653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7"} Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780325 5012 scope.go:117] "RemoveContainer" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780507 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795192 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795233 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795377 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795398 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795436 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795511 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795607 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.796716 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs" (OuterVolumeSpecName: "logs") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.803294 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.807225 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058"} Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.807365 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.809128 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts" (OuterVolumeSpecName: "scripts") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.810575 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.823383 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh" (OuterVolumeSpecName: "kube-api-access-rxflh") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "kube-api-access-rxflh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.845492 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.826426414 podStartE2EDuration="6.845475163s" podCreationTimestamp="2026-02-19 05:43:57 +0000 UTC" firstStartedPulling="2026-02-19 05:44:00.487619311 +0000 UTC m=+1136.520941880" lastFinishedPulling="2026-02-19 05:44:03.50666806 +0000 UTC m=+1139.539990629" observedRunningTime="2026-02-19 05:44:03.84209477 +0000 UTC m=+1139.875417339" watchObservedRunningTime="2026-02-19 05:44:03.845475163 +0000 UTC m=+1139.878797732" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899609 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899636 5012 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899645 5012 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899655 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899663 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.910908 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data" (OuterVolumeSpecName: "config-data") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.967452 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.004415 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.004447 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.056266 5012 scope.go:117] "RemoveContainer" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.100353 5012 scope.go:117] "RemoveContainer" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.100973 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": container with ID starting with 4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d not found: ID does not exist" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.101064 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d"} err="failed to get container status \"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": rpc error: code = NotFound desc = could not find container \"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": container with ID starting with 4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d not found: ID does not exist" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.101124 5012 scope.go:117] "RemoveContainer" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.121700 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": container with ID starting with 9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a not found: ID does not exist" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.121836 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a"} err="failed to get container status \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": rpc error: code = NotFound desc = could not find container \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": container with ID 
starting with 9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a not found: ID does not exist" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.121869 5012 scope.go:117] "RemoveContainer" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.126245 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d"} err="failed to get container status \"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": rpc error: code = NotFound desc = could not find container \"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": container with ID starting with 4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d not found: ID does not exist" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.126294 5012 scope.go:117] "RemoveContainer" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.127728 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a"} err="failed to get container status \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": rpc error: code = NotFound desc = could not find container \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": container with ID starting with 9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a not found: ID does not exist" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.168820 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.183180 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 
05:44:04.189848 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.190357 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.190460 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.190524 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon-log" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.190594 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon-log" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.190659 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="init" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.190713 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="init" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.191179 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="dnsmasq-dns" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.191256 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="dnsmasq-dns" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.191366 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api-log" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.191443 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b154229-6752-44d3-8b53-96147254af19" 
containerName="cinder-api-log" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.191516 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.191576 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192266 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="dnsmasq-dns" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192375 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api-log" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192473 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192542 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192604 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon-log" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.193814 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.196011 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.198119 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.198383 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.211427 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.230622 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-px7xk" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.321399 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.321767 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.321877 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " 
pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.321954 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322025 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c548edc-6755-4310-9b8d-780a384ec6bd-logs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322098 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c548edc-6755-4310-9b8d-780a384ec6bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322189 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5spvl\" (UniqueName: \"kubernetes.io/projected/4c548edc-6755-4310-9b8d-780a384ec6bd-kube-api-access-5spvl\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322264 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322391 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-scripts\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.426458 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") pod \"787f8a71-dee4-40d2-b33b-85bcfc58f921\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.426997 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") pod \"787f8a71-dee4-40d2-b33b-85bcfc58f921\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.427142 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") pod \"787f8a71-dee4-40d2-b33b-85bcfc58f921\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.427503 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5spvl\" (UniqueName: \"kubernetes.io/projected/4c548edc-6755-4310-9b8d-780a384ec6bd-kube-api-access-5spvl\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.429329 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data\") pod \"cinder-api-0\" (UID: 
\"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.429481 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-scripts\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.429976 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.430114 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.430209 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.430999 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.446776 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c548edc-6755-4310-9b8d-780a384ec6bd-logs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.438822 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.446653 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.447194 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c548edc-6755-4310-9b8d-780a384ec6bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.447270 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c548edc-6755-4310-9b8d-780a384ec6bd-logs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.431691 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc" (OuterVolumeSpecName: "kube-api-access-s87kc") pod "787f8a71-dee4-40d2-b33b-85bcfc58f921" (UID: "787f8a71-dee4-40d2-b33b-85bcfc58f921"). InnerVolumeSpecName "kube-api-access-s87kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.447561 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c548edc-6755-4310-9b8d-780a384ec6bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.447722 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.452941 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.454959 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.454998 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-scripts\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.456017 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5spvl\" (UniqueName: 
\"kubernetes.io/projected/4c548edc-6755-4310-9b8d-780a384ec6bd-kube-api-access-5spvl\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.483007 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "787f8a71-dee4-40d2-b33b-85bcfc58f921" (UID: "787f8a71-dee4-40d2-b33b-85bcfc58f921"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.483521 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config" (OuterVolumeSpecName: "config") pod "787f8a71-dee4-40d2-b33b-85bcfc58f921" (UID: "787f8a71-dee4-40d2-b33b-85bcfc58f921"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.535595 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.549921 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.549974 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.549990 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.758221 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b154229-6752-44d3-8b53-96147254af19" path="/var/lib/kubelet/pods/6b154229-6752-44d3-8b53-96147254af19/volumes" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.862995 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-px7xk" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.863636 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-px7xk" event={"ID":"787f8a71-dee4-40d2-b33b-85bcfc58f921","Type":"ContainerDied","Data":"22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39"} Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.863715 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.024640 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.024929 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="dnsmasq-dns" containerID="cri-o://93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" gracePeriod=10 Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.031201 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.033199 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.060657 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:44:05 crc kubenswrapper[5012]: E0219 05:44:05.089982 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787f8a71-dee4-40d2-b33b-85bcfc58f921" containerName="neutron-db-sync" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.090145 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="787f8a71-dee4-40d2-b33b-85bcfc58f921" containerName="neutron-db-sync" 
Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.090611 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="787f8a71-dee4-40d2-b33b-85bcfc58f921" containerName="neutron-db-sync" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.103635 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.105544 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.275885 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.276528 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.276640 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.276722 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.276934 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.277109 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.286851 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.295498 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.298161 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.299218 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.299556 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rtrj8" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.299680 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.325499 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378559 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378631 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " 
pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378725 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378760 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378777 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378817 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " 
pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378889 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378915 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378934 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.379757 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.380053 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc 
kubenswrapper[5012]: I0219 05:44:05.380421 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.380867 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.381067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.433720 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482574 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482615 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482673 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482728 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482818 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.495067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.495085 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.495827 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.498939 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.503629 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.539494 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.652590 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.851509 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.898959 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899067 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899117 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899177 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899221 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899321 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.927582 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28" (OuterVolumeSpecName: "kube-api-access-9wg28") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "kube-api-access-9wg28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957461 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerID="93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" exitCode=0 Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957528 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerDied","Data":"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38"} Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957574 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerDied","Data":"ead40496902b159e9bebd9ba1a479551b8997a76aa96d1285d684eafe66d05a5"} Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957592 5012 scope.go:117] "RemoveContainer" containerID="93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957708 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.982979 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.995551 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4c548edc-6755-4310-9b8d-780a384ec6bd","Type":"ContainerStarted","Data":"81b8a8622fe11df4ede94ace220c18a385b1e8288789ef6c75d156fafc627131"} Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.011623 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.026059 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.044241 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config" (OuterVolumeSpecName: "config") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.045023 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.101337 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.121665 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.121708 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.121720 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.121730 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc 
kubenswrapper[5012]: I0219 05:44:06.191948 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.208995 5012 scope.go:117] "RemoveContainer" containerID="b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.224856 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.231650 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.275401 5012 scope.go:117] "RemoveContainer" containerID="93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" Feb 19 05:44:06 crc kubenswrapper[5012]: E0219 05:44:06.276796 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38\": container with ID starting with 93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38 not found: ID does not exist" containerID="93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.278087 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38"} err="failed to get container status 
\"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38\": rpc error: code = NotFound desc = could not find container \"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38\": container with ID starting with 93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38 not found: ID does not exist" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.278180 5012 scope.go:117] "RemoveContainer" containerID="b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8" Feb 19 05:44:06 crc kubenswrapper[5012]: E0219 05:44:06.279415 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8\": container with ID starting with b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8 not found: ID does not exist" containerID="b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.279755 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8"} err="failed to get container status \"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8\": rpc error: code = NotFound desc = could not find container \"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8\": container with ID starting with b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8 not found: ID does not exist" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.300352 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"] Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.309642 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"] Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.719014 5012 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" path="/var/lib/kubelet/pods/6c5e24dc-215e-4f19-8cf6-241bf57648f9/volumes" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.003618 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4c548edc-6755-4310-9b8d-780a384ec6bd","Type":"ContainerStarted","Data":"891d0dc26068ff29bd823164af1de54e6f4a7ac97540d918ded71559f4c5a68e"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.005862 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" exitCode=1 Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.005917 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.005945 5012 scope.go:117] "RemoveContainer" containerID="ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.007278 5012 scope.go:117] "RemoveContainer" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" Feb 19 05:44:07 crc kubenswrapper[5012]: E0219 05:44:07.008013 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7fdaa495-6cde-409a-871a-e334ca3f2a91)\"" pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.040343 5012 generic.go:334] "Generic (PLEG): container finished" podID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" 
containerID="ca6a3289326a3d74df11835a9c2f296bc10d31bbffc5d5c69c448a3f93f521ea" exitCode=0 Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.040433 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerDied","Data":"ca6a3289326a3d74df11835a9c2f296bc10d31bbffc5d5c69c448a3f93f521ea"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.040468 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerStarted","Data":"5892f4877b405b9244dd43361effc1a470655536dbd633845dd04bd643dbfba5"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.046032 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerStarted","Data":"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.046113 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerStarted","Data":"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.046125 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerStarted","Data":"05d6404e6cfe0f5924141acac1a5c449939eddf44dc7eb77958158988b1bb5ee"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.046196 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.125830 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77b847d784-sfqqm" 
podStartSLOduration=2.125808324 podStartE2EDuration="2.125808324s" podCreationTimestamp="2026-02-19 05:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:07.079571042 +0000 UTC m=+1143.112893611" watchObservedRunningTime="2026-02-19 05:44:07.125808324 +0000 UTC m=+1143.159130893" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.150730 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.303876 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.307557 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.346080 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.419749 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.420348 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-778557f86b-hp4xf" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log" containerID="cri-o://fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89" gracePeriod=30 Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.420726 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-778557f86b-hp4xf" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api" containerID="cri-o://0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9" gracePeriod=30 Feb 19 
05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.060426 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerStarted","Data":"0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb"} Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.060672 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.062600 5012 generic.go:334] "Generic (PLEG): container finished" podID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerID="fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89" exitCode=143 Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.062656 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerDied","Data":"fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89"} Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.064342 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4c548edc-6755-4310-9b8d-780a384ec6bd","Type":"ContainerStarted","Data":"1d296e7a39eecf01a7bb085c9cc72bacf3f971a8d9a82128da5ff4ae87652e7e"} Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.064471 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.088577 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" podStartSLOduration=3.088561229 podStartE2EDuration="3.088561229s" podCreationTimestamp="2026-02-19 05:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:08.079004815 +0000 UTC 
m=+1144.112327384" watchObservedRunningTime="2026-02-19 05:44:08.088561229 +0000 UTC m=+1144.121883788" Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.140495 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.14047559 podStartE2EDuration="4.14047559s" podCreationTimestamp="2026-02-19 05:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:08.133330495 +0000 UTC m=+1144.166653084" watchObservedRunningTime="2026-02-19 05:44:08.14047559 +0000 UTC m=+1144.173798159" Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.172599 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.075282 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="probe" containerID="cri-o://bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" gracePeriod=30 Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.075951 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="cinder-scheduler" containerID="cri-o://ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" gracePeriod=30 Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.620462 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5ff88b6c7c-5bg66"] Feb 19 05:44:09 crc kubenswrapper[5012]: E0219 05:44:09.621049 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="init" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.621070 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="init" Feb 19 05:44:09 crc kubenswrapper[5012]: E0219 05:44:09.621080 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="dnsmasq-dns" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.621090 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="dnsmasq-dns" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.621341 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="dnsmasq-dns" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.622686 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.624952 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.625204 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.650849 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ff88b6c7c-5bg66"] Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718652 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718739 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-ovndb-tls-certs\") pod 
\"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718765 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-httpd-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718787 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-internal-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718916 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-combined-ca-bundle\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-public-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718991 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8mbg\" (UniqueName: 
\"kubernetes.io/projected/eb805277-3dfc-4810-9845-3ba928d262c2-kube-api-access-m8mbg\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820682 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-combined-ca-bundle\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820776 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-public-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820805 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8mbg\" (UniqueName: \"kubernetes.io/projected/eb805277-3dfc-4810-9845-3ba928d262c2-kube-api-access-m8mbg\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820849 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820903 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-ovndb-tls-certs\") pod 
\"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820927 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-httpd-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820949 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-internal-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.831142 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-httpd-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.831167 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-combined-ca-bundle\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.832083 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-public-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" 
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.834165 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-internal-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.834958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.835556 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-ovndb-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.843929 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8mbg\" (UniqueName: \"kubernetes.io/projected/eb805277-3dfc-4810-9845-3ba928d262c2-kube-api-access-m8mbg\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.932438 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.944369 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.955151 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.960177 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.961998 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.121918 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.151518 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.151964 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api" containerID="cri-o://4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29" gracePeriod=30 Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.151915 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log" containerID="cri-o://65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91" gracePeriod=30 Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.168636 5012 generic.go:334] "Generic (PLEG): container finished" podID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerID="bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" exitCode=0 Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.169146 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerDied","Data":"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0"} Feb 19 
05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.808315 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-778557f86b-hp4xf" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:60328->10.217.0.175:9311: read: connection reset by peer" Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.809384 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-778557f86b-hp4xf" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:60326->10.217.0.175:9311: read: connection reset by peer" Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.856036 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ff88b6c7c-5bg66"] Feb 19 05:44:10 crc kubenswrapper[5012]: W0219 05:44:10.880149 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb805277_3dfc_4810_9845_3ba928d262c2.slice/crio-f2b086a9c4d22f99f2517d3e9c47f0f31042cc048d8980b321e79638eb715ed4 WatchSource:0}: Error finding container f2b086a9c4d22f99f2517d3e9c47f0f31042cc048d8980b321e79638eb715ed4: Status 404 returned error can't find the container with id f2b086a9c4d22f99f2517d3e9c47f0f31042cc048d8980b321e79638eb715ed4 Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.195941 5012 generic.go:334] "Generic (PLEG): container finished" podID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerID="0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9" exitCode=0 Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.196333 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" 
event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerDied","Data":"0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9"} Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.202246 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff88b6c7c-5bg66" event={"ID":"eb805277-3dfc-4810-9845-3ba928d262c2","Type":"ContainerStarted","Data":"f2b086a9c4d22f99f2517d3e9c47f0f31042cc048d8980b321e79638eb715ed4"} Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.212697 5012 generic.go:334] "Generic (PLEG): container finished" podID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerID="65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91" exitCode=143 Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.212734 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerDied","Data":"65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91"} Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.212947 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c6b5c5b7b-9nnqj" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-log" containerID="cri-o://f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c" gracePeriod=30 Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.212995 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c6b5c5b7b-9nnqj" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-api" containerID="cri-o://1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff" gracePeriod=30 Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.297598 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.380814 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.380982 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.381020 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.381078 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.381108 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.382408 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs" (OuterVolumeSpecName: "logs") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.384515 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf" (OuterVolumeSpecName: "kube-api-access-j52pf") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "kube-api-access-j52pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.385640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.409603 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.460190 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data" (OuterVolumeSpecName: "config-data") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482843 5012 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482872 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482881 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482890 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482899 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.808770 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.809208 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.809883 5012 scope.go:117] "RemoveContainer" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" Feb 19 05:44:11 crc 
kubenswrapper[5012]: E0219 05:44:11.810085 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7fdaa495-6cde-409a-871a-e334ca3f2a91)\"" pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.884337 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.157:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8080: connect: connection refused" Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.884456 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.058109 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:46316->10.217.0.169:9322: read: connection reset by peer" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.058114 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:46314->10.217.0.169:9322: read: connection reset by peer" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.107907 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.232597 5012 generic.go:334] "Generic 
(PLEG): container finished" podID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerID="4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29" exitCode=0 Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.232673 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerDied","Data":"4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29"} Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.239922 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerDied","Data":"831c1b2e39b299e04f560adb31739eb0da9f5a5165d710984ac8d2ab457658e9"} Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.239957 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.239968 5012 scope.go:117] "RemoveContainer" containerID="0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.256406 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff88b6c7c-5bg66" event={"ID":"eb805277-3dfc-4810-9845-3ba928d262c2","Type":"ContainerStarted","Data":"5baad8992d0e0a0354c70247939ace59bcd61af49dbf633317c1595c364e8821"} Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.256448 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff88b6c7c-5bg66" event={"ID":"eb805277-3dfc-4810-9845-3ba928d262c2","Type":"ContainerStarted","Data":"853a12ed08e2f2e0f8f4850de102e800017933151f3260846449b2588200be43"} Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.257524 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 
05:44:12.262557 5012 generic.go:334] "Generic (PLEG): container finished" podID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerID="f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c" exitCode=143 Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.262601 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerDied","Data":"f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c"} Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.267020 5012 scope.go:117] "RemoveContainer" containerID="fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.279734 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5ff88b6c7c-5bg66" podStartSLOduration=3.279720115 podStartE2EDuration="3.279720115s" podCreationTimestamp="2026-02-19 05:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:12.276022335 +0000 UTC m=+1148.309344904" watchObservedRunningTime="2026-02-19 05:44:12.279720115 +0000 UTC m=+1148.313042684" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.300541 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.305619 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.578852 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604204 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604465 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604530 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604699 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604802 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.606887 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs" (OuterVolumeSpecName: "logs") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.629719 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl" (OuterVolumeSpecName: "kube-api-access-m5wsl") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "kube-api-access-m5wsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.678521 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.696865 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.708820 5012 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.708847 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.708855 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.708863 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.714447 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data" (OuterVolumeSpecName: "config-data") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.730842 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" path="/var/lib/kubelet/pods/6dfce017-0fe6-4613-910b-2c0f88af8bb2/volumes" Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.811210 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.271236 5012 generic.go:334] "Generic (PLEG): container finished" podID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerID="1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff" exitCode=0 Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.271327 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerDied","Data":"1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff"} Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.271758 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerDied","Data":"020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b"} Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.271782 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.273953 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerDied","Data":"1f8ff58170fed0be8d7680ffb942663aaa5ec3f1c388578dbd28c9e5432c8ac1"} Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 
05:44:13.274019 5012 scope.go:117] "RemoveContainer" containerID="4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.274039 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.351875 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.362772 5012 scope.go:117] "RemoveContainer" containerID="65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.388216 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.402673 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.420646 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.420985 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421000 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log" Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421010 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421017 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api" Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421028 5012 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421034 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log" Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421052 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-api" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421058 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-api" Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421069 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421086 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api" Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421102 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-log" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421107 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-log" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421285 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421294 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421321 5012 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-api" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421331 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421346 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-log" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421361 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422430 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422469 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422514 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422540 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7kbf\" (UniqueName: 
\"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422561 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422651 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422673 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.424668 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs" (OuterVolumeSpecName: "logs") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.433173 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.437119 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.437552 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.440131 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.441947 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts" (OuterVolumeSpecName: "scripts") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.472925 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.507692 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf" (OuterVolumeSpecName: "kube-api-access-s7kbf") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "kube-api-access-s7kbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.525903 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-config-data\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.525995 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526014 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526052 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmhn\" (UniqueName: \"kubernetes.io/projected/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-kube-api-access-nqmhn\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526071 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-logs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 
05:44:13.526094 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526153 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526198 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526209 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7kbf\" (UniqueName: \"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526220 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.569198 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data" (OuterVolumeSpecName: "config-data") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.599178 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627602 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627646 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-config-data\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627712 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627729 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 
05:44:13.627768 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqmhn\" (UniqueName: \"kubernetes.io/projected/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-kube-api-access-nqmhn\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627789 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-logs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627812 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627858 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627873 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.629404 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-logs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.636220 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.636924 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.639735 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.639949 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.640126 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-config-data\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.647849 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqmhn\" (UniqueName: \"kubernetes.io/projected/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-kube-api-access-nqmhn\") pod \"watcher-api-0\" (UID: 
\"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.648507 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.654465 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.732001 5012 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.732366 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.827001 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.219329 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.313706 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.313723 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerDied","Data":"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b"} Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.313783 5012 scope.go:117] "RemoveContainer" containerID="bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.313702 5012 generic.go:334] "Generic (PLEG): container finished" podID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerID="ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" exitCode=0 Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.314400 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerDied","Data":"700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1"} Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.314442 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.354761 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356794 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356887 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356911 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356935 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356998 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.360677 5012 scope.go:117] "RemoveContainer" containerID="ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.362557 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.365075 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x" (OuterVolumeSpecName: "kube-api-access-cch2x") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "kube-api-access-cch2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.375803 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.379951 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.384443 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts" (OuterVolumeSpecName: "scripts") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.391890 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:44:14 crc kubenswrapper[5012]: W0219 05:44:14.440958 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d74d5de_7e1d_47cc_8aaa_cb303332a03a.slice/crio-5a97027ceaa1b3242b364b1c3887838d74e75f243a870b871c16fde1bed831b3 WatchSource:0}: Error finding container 5a97027ceaa1b3242b364b1c3887838d74e75f243a870b871c16fde1bed831b3: Status 404 returned error can't find the container with id 5a97027ceaa1b3242b364b1c3887838d74e75f243a870b871c16fde1bed831b3 Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.460060 5012 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.460095 5012 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.460109 5012 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.460120 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.480461 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.501549 5012 scope.go:117] "RemoveContainer" containerID="bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.501552 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: E0219 05:44:14.502486 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0\": container with ID starting with bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0 not found: ID does not exist" containerID="bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.502546 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0"} err="failed to get container status \"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0\": rpc error: code = NotFound desc = could not find container \"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0\": container with ID starting with bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0 not found: ID does not exist" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.502576 5012 scope.go:117] "RemoveContainer" containerID="ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" Feb 19 05:44:14 crc kubenswrapper[5012]: E0219 05:44:14.505055 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b\": container with ID starting with ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b not found: ID does not exist" containerID="ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.505079 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b"} err="failed 
to get container status \"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b\": rpc error: code = NotFound desc = could not find container \"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b\": container with ID starting with ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b not found: ID does not exist" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.535292 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data" (OuterVolumeSpecName: "config-data") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.561709 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.561738 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.650391 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.659593 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.679829 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: E0219 05:44:14.680313 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="cinder-scheduler" Feb 19 05:44:14 crc 
kubenswrapper[5012]: I0219 05:44:14.680333 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="cinder-scheduler" Feb 19 05:44:14 crc kubenswrapper[5012]: E0219 05:44:14.680344 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="probe" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.680353 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="probe" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.680650 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="cinder-scheduler" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.680667 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="probe" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.681861 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.685862 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.743153 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" path="/var/lib/kubelet/pods/17c5eb4a-b8b3-4178-b5a0-2a37211266e6/volumes" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.743783 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" path="/var/lib/kubelet/pods/a9c1c12b-f055-417b-9300-706f98b0f8cc/volumes" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.745369 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" path="/var/lib/kubelet/pods/d214ce94-6c65-4641-a1e2-21f5f920ecec/volumes" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.746404 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.768917 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vtfb\" (UniqueName: \"kubernetes.io/projected/42946b07-c256-43a7-99d0-45f94c019663-kube-api-access-2vtfb\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769024 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42946b07-c256-43a7-99d0-45f94c019663-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769089 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769148 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769168 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-scripts\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769191 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.870872 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vtfb\" (UniqueName: \"kubernetes.io/projected/42946b07-c256-43a7-99d0-45f94c019663-kube-api-access-2vtfb\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.870954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42946b07-c256-43a7-99d0-45f94c019663-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871033 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871070 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42946b07-c256-43a7-99d0-45f94c019663-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871096 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871176 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-scripts\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871240 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.875729 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-scripts\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.878139 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.878950 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.883749 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.888424 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vtfb\" (UniqueName: \"kubernetes.io/projected/42946b07-c256-43a7-99d0-45f94c019663-kube-api-access-2vtfb\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.000672 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.346500 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7d74d5de-7e1d-47cc-8aaa-cb303332a03a","Type":"ContainerStarted","Data":"a7908398478d5b196be10ac474c1cbebad2ba060379ae3af8ceb4482a8c331ad"} Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.346862 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7d74d5de-7e1d-47cc-8aaa-cb303332a03a","Type":"ContainerStarted","Data":"8a1bbdc39025fc8ea5f32cc89279b1b49872252c88e364ca4a448083da327fe8"} Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.346874 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7d74d5de-7e1d-47cc-8aaa-cb303332a03a","Type":"ContainerStarted","Data":"5a97027ceaa1b3242b364b1c3887838d74e75f243a870b871c16fde1bed831b3"} Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.348414 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.414205 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.414183377 podStartE2EDuration="2.414183377s" podCreationTimestamp="2026-02-19 05:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:15.378163506 +0000 UTC m=+1151.411486075" watchObservedRunningTime="2026-02-19 05:44:15.414183377 +0000 UTC m=+1151.447505946" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.544353 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.619889 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.620142 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="dnsmasq-dns" containerID="cri-o://cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" gracePeriod=10 Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.696494 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:15 crc kubenswrapper[5012]: W0219 05:44:15.726473 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42946b07_c256_43a7_99d0_45f94c019663.slice/crio-4d49f99cf63c7a4469218344357949aff2f5db7cd48450372ee18d0677e9d8bf WatchSource:0}: Error finding container 4d49f99cf63c7a4469218344357949aff2f5db7cd48450372ee18d0677e9d8bf: Status 404 returned error can't find the container with id 4d49f99cf63c7a4469218344357949aff2f5db7cd48450372ee18d0677e9d8bf Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.065445 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.067136 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.069507 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.070598 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-99wcv" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.077531 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.077891 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.098925 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hqg\" (UniqueName: \"kubernetes.io/projected/75258dbe-c223-4e55-92a6-8e588745294a-kube-api-access-54hqg\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.098991 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config-secret\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.099032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.099053 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.199733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hqg\" (UniqueName: \"kubernetes.io/projected/75258dbe-c223-4e55-92a6-8e588745294a-kube-api-access-54hqg\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.199797 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config-secret\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.199837 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.199859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.203017 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.215882 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config-secret\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.219492 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hqg\" (UniqueName: \"kubernetes.io/projected/75258dbe-c223-4e55-92a6-8e588745294a-kube-api-access-54hqg\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.227891 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.319261 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.396179 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.405865 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.405975 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406025 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406058 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406120 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406459 5012 generic.go:334] "Generic (PLEG): container finished" podID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerID="cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" exitCode=0 Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406506 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerDied","Data":"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23"} Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406534 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerDied","Data":"a1291378cdde1b6340e354ff4d89e75f3fa2d7a84c8a3f64370b1decfc0c8b1c"} Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406549 5012 scope.go:117] "RemoveContainer" containerID="cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406628 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.456258 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42946b07-c256-43a7-99d0-45f94c019663","Type":"ContainerStarted","Data":"4d49f99cf63c7a4469218344357949aff2f5db7cd48450372ee18d0677e9d8bf"} Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.481604 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7" (OuterVolumeSpecName: "kube-api-access-frcn7") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "kube-api-access-frcn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.512664 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.579137 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.605295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.614081 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.614548 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.614562 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.614573 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.615775 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.661596 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config" (OuterVolumeSpecName: "config") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.670743 5012 scope.go:117] "RemoveContainer" containerID="46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.716259 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.716295 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.719107 5012 scope.go:117] "RemoveContainer" containerID="cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" Feb 19 05:44:16 crc kubenswrapper[5012]: E0219 05:44:16.719851 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23\": container with ID starting with cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23 not found: ID does not exist" containerID="cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.719909 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23"} err="failed to get container status \"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23\": rpc error: code = NotFound desc = could not find container \"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23\": container with ID starting with cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23 not found: ID does not exist" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.719934 5012 scope.go:117] "RemoveContainer" containerID="46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe" Feb 19 05:44:16 crc kubenswrapper[5012]: E0219 05:44:16.720799 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe\": container with ID starting with 46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe not found: ID does not exist" containerID="46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.720824 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe"} err="failed to get container status \"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe\": rpc error: code = NotFound desc = could not find container \"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe\": container with ID starting with 46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe not found: ID does not exist" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.764586 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.786241 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.978180 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 05:44:17 crc kubenswrapper[5012]: I0219 05:44:17.466478 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"75258dbe-c223-4e55-92a6-8e588745294a","Type":"ContainerStarted","Data":"7e7ab391141582e000d1039683e216e4c6d0486f5dd4ddf726f4f452bb59b0db"} Feb 19 05:44:17 crc kubenswrapper[5012]: I0219 05:44:17.472453 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42946b07-c256-43a7-99d0-45f94c019663","Type":"ContainerStarted","Data":"47a3ea34ecfaad01acb97532a27081ad24ab168ffd93d9eb4032625cfdc5a3fd"} Feb 19 05:44:17 crc kubenswrapper[5012]: I0219 05:44:17.477369 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:44:17 crc kubenswrapper[5012]: I0219 05:44:17.883080 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 05:44:18 crc kubenswrapper[5012]: I0219 05:44:18.497329 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42946b07-c256-43a7-99d0-45f94c019663","Type":"ContainerStarted","Data":"4b262b6a4abbd218c72a86b7b7bea169f6aaad28e2988ee8eea6494fba91952a"} Feb 19 05:44:18 crc kubenswrapper[5012]: I0219 05:44:18.529038 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.529015128 podStartE2EDuration="4.529015128s" podCreationTimestamp="2026-02-19 05:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:18.522680023 +0000 UTC m=+1154.556002582" watchObservedRunningTime="2026-02-19 05:44:18.529015128 +0000 UTC m=+1154.562337697" Feb 19 05:44:18 crc 
kubenswrapper[5012]: I0219 05:44:18.717187 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" path="/var/lib/kubelet/pods/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7/volumes" Feb 19 05:44:18 crc kubenswrapper[5012]: I0219 05:44:18.761680 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 05:44:18 crc kubenswrapper[5012]: I0219 05:44:18.828082 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.001793 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.527784 5012 generic.go:334] "Generic (PLEG): container finished" podID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerID="3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee" exitCode=137 Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.527870 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerDied","Data":"3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee"} Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.528198 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerDied","Data":"7cfa7cf48e4edcddab8aec2d0bfb0aeea8557ac2316f0d4b2e00c1aa2310cba1"} Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.528212 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cfa7cf48e4edcddab8aec2d0bfb0aeea8557ac2316f0d4b2e00c1aa2310cba1" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.623271 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768366 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768537 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768634 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768762 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768785 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.773705 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs" (OuterVolumeSpecName: "logs") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.778547 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.778585 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn" (OuterVolumeSpecName: "kube-api-access-d9sfn") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "kube-api-access-d9sfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.797347 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data" (OuterVolumeSpecName: "config-data") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.805892 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts" (OuterVolumeSpecName: "scripts") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875507 5012 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875540 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875554 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875563 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875571 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235393 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-59bfbf7475-v98h9"] Feb 19 05:44:21 crc kubenswrapper[5012]: E0219 05:44:21.235747 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="dnsmasq-dns" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235760 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="dnsmasq-dns" Feb 19 05:44:21 crc kubenswrapper[5012]: E0219 05:44:21.235775 5012 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235781 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" Feb 19 05:44:21 crc kubenswrapper[5012]: E0219 05:44:21.235805 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon-log" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235814 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon-log" Feb 19 05:44:21 crc kubenswrapper[5012]: E0219 05:44:21.235829 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="init" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235835 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="init" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.236003 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon-log" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.236030 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="dnsmasq-dns" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.236043 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.236945 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.239975 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.240272 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.243043 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.262940 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59bfbf7475-v98h9"] Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387362 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-log-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387424 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwrcf\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-kube-api-access-lwrcf\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387475 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-run-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: 
I0219 05:44:21.387516 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-combined-ca-bundle\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-internal-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387629 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-public-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387667 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-config-data\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387705 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-etc-swift\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 
05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492699 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-combined-ca-bundle\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492828 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-internal-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492857 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-public-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492892 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-config-data\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492928 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-etc-swift\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492968 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-log-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwrcf\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-kube-api-access-lwrcf\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.493022 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-run-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.494662 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-log-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.499233 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-public-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.502031 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-run-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.503380 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-etc-swift\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.504052 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-config-data\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.507872 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-combined-ca-bundle\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.511405 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-internal-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.512100 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwrcf\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-kube-api-access-lwrcf\") 
pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.538714 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.554878 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.691361 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.700458 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.810578 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.811839 5012 scope.go:117] "RemoveContainer" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.812560 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.493375 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59bfbf7475-v98h9"] Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.554614 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59bfbf7475-v98h9" event={"ID":"4c9aa274-240d-4d50-b38a-754dd493f351","Type":"ContainerStarted","Data":"9e326161256ccd442b4abda067251f70672f51e2d1e6574a1100881325273363"} Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.564707 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651"} Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.718637 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" path="/var/lib/kubelet/pods/7c163961-185c-418b-a0f5-a4d55b59f3ec/volumes" Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.852786 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.853881 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-central-agent" containerID="cri-o://90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad" gracePeriod=30 Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.854682 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-notification-agent" containerID="cri-o://7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54" gracePeriod=30 Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.854704 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" containerID="cri-o://6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058" gracePeriod=30 Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.854759 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="sg-core" containerID="cri-o://01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a" gracePeriod=30 Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 
05:44:22.881770 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.180:3000/\": EOF" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.593330 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59bfbf7475-v98h9" event={"ID":"4c9aa274-240d-4d50-b38a-754dd493f351","Type":"ContainerStarted","Data":"dcd35b1e238c144328bbc91eb806c457380d10214f21adb6cef468590b9f0d67"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.593375 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59bfbf7475-v98h9" event={"ID":"4c9aa274-240d-4d50-b38a-754dd493f351","Type":"ContainerStarted","Data":"f20697a066eca49cdf077e485aeb577db36c44950fa97a7c16cc76c2e5e2e40b"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.594433 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.594497 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.598872 5012 generic.go:334] "Generic (PLEG): container finished" podID="236f420e-8855-41f8-8b25-813be7b28799" containerID="6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058" exitCode=0 Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.598916 5012 generic.go:334] "Generic (PLEG): container finished" podID="236f420e-8855-41f8-8b25-813be7b28799" containerID="01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a" exitCode=2 Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.598924 5012 generic.go:334] "Generic (PLEG): container finished" podID="236f420e-8855-41f8-8b25-813be7b28799" containerID="90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad" exitCode=0 
Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.598948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.599006 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.599019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.614034 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-59bfbf7475-v98h9" podStartSLOduration=2.6140125320000003 podStartE2EDuration="2.614012532s" podCreationTimestamp="2026-02-19 05:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:23.610023134 +0000 UTC m=+1159.643345713" watchObservedRunningTime="2026-02-19 05:44:23.614012532 +0000 UTC m=+1159.647335101" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.828556 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.837040 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 19 05:44:24 crc kubenswrapper[5012]: I0219 05:44:24.660030 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 05:44:25 crc 
kubenswrapper[5012]: I0219 05:44:25.207445 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.688559 5012 generic.go:334] "Generic (PLEG): container finished" podID="236f420e-8855-41f8-8b25-813be7b28799" containerID="7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54" exitCode=0 Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.688635 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54"} Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.693902 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" exitCode=1 Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.693934 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651"} Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.693963 5012 scope.go:117] "RemoveContainer" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.694635 5012 scope.go:117] "RemoveContainer" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" Feb 19 05:44:26 crc kubenswrapper[5012]: E0219 05:44:26.694854 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7fdaa495-6cde-409a-871a-e334ca3f2a91)\"" 
pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" Feb 19 05:44:28 crc kubenswrapper[5012]: I0219 05:44:28.130439 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.180:3000/\": dial tcp 10.217.0.180:3000: connect: connection refused" Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.353353 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.353910 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-log" containerID="cri-o://c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a" gracePeriod=30 Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.358437 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-httpd" containerID="cri-o://bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd" gracePeriod=30 Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.741082 5012 generic.go:334] "Generic (PLEG): container finished" podID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerID="c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a" exitCode=143 Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.741128 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerDied","Data":"c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a"} Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.578439 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.604818 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.639606 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.639852 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-log" containerID="cri-o://a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5" gracePeriod=30 Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.639906 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-httpd" containerID="cri-o://4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46" gracePeriod=30 Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.807988 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.808020 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.808982 5012 scope.go:117] "RemoveContainer" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" Feb 19 05:44:31 crc kubenswrapper[5012]: E0219 05:44:31.809246 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine 
pod=watcher-decision-engine-0_openstack(7fdaa495-6cde-409a-871a-e334ca3f2a91)\"" pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.824287 5012 generic.go:334] "Generic (PLEG): container finished" podID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerID="bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd" exitCode=0 Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.824570 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerDied","Data":"bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd"} Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.861010 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942221 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942346 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942445 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52m69\" (UniqueName: \"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 
19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942545 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942587 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942848 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942911 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.943458 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.943608 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.947911 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69" (OuterVolumeSpecName: "kube-api-access-52m69") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "kube-api-access-52m69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.948261 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.948382 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts" (OuterVolumeSpecName: "scripts") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.965793 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.986917 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.047909 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.047992 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048091 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048120 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048156 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048198 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048337 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048387 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048749 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048764 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048774 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52m69\" (UniqueName: 
\"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048785 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.054247 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx" (OuterVolumeSpecName: "kube-api-access-q5tcx") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "kube-api-access-q5tcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.054601 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs" (OuterVolumeSpecName: "logs") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.058675 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.058748 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.060927 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts" (OuterVolumeSpecName: "scripts") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.062450 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.101294 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.104155 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data" (OuterVolumeSpecName: "config-data") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.131238 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data" (OuterVolumeSpecName: "config-data") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154854 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154879 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154889 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154899 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154918 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154927 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154936 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154944 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154951 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.157822 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.188703 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.256871 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.256906 5012 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.835956 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"75258dbe-c223-4e55-92a6-8e588745294a","Type":"ContainerStarted","Data":"f641cbad619b4bb09865d3af7634c8b71722cdfa0c947251105b37553a070d26"} Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.840337 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"0b4212ecca9b60999638c1e6662994f4b7843d12f33587c1778eba71df434b72"} Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.840382 5012 scope.go:117] "RemoveContainer" containerID="6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.840501 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.846386 5012 generic.go:334] "Generic (PLEG): container finished" podID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerID="a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5" exitCode=143 Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.846530 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerDied","Data":"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5"} Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.850408 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerDied","Data":"884f09cbda393c2ecb1a2ab4bc0243e004e662fb5c7beaf39c14a2c689ed4fc6"} Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.850510 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.874654 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.378274568 podStartE2EDuration="16.874630558s" podCreationTimestamp="2026-02-19 05:44:16 +0000 UTC" firstStartedPulling="2026-02-19 05:44:17.011292869 +0000 UTC m=+1153.044615438" lastFinishedPulling="2026-02-19 05:44:31.507648869 +0000 UTC m=+1167.540971428" observedRunningTime="2026-02-19 05:44:32.862245347 +0000 UTC m=+1168.895567936" watchObservedRunningTime="2026-02-19 05:44:32.874630558 +0000 UTC m=+1168.907953117" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.924181 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.928008 5012 scope.go:117] "RemoveContainer" containerID="01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.940055 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.960761 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.970567 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.980608 5012 scope.go:117] "RemoveContainer" containerID="7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.985557 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986216 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="sg-core" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986239 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="sg-core" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986261 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-log" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986272 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-log" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986288 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986319 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986342 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-notification-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986350 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-notification-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986360 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986367 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986381 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-central-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986388 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-central-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986616 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-notification-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986641 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="sg-core" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986656 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986670 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-central-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986682 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-log" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986693 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.988341 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.996469 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.003664 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.006262 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.036046 5012 scope.go:117] "RemoveContainer" containerID="90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.043161 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.045170 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.046091 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.048723 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080057 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080104 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080184 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080206 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080268 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqwm4\" (UniqueName: \"kubernetes.io/projected/8cfddc12-1c4c-4faf-9edb-71fb80608785-kube-api-access-lqwm4\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080361 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080385 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080441 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-logs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.086444 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.088033 5012 scope.go:117] "RemoveContainer" containerID="bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.114147 5012 scope.go:117] "RemoveContainer" containerID="c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183286 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqwm4\" (UniqueName: \"kubernetes.io/projected/8cfddc12-1c4c-4faf-9edb-71fb80608785-kube-api-access-lqwm4\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183442 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183488 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183523 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183559 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183582 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183628 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183664 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183690 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-logs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183740 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183775 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183802 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183843 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") pod \"ceilometer-0\" (UID: 
\"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183889 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183918 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.185829 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.185935 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.185990 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-logs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " 
pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.191393 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.193746 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.194236 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.204290 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.205289 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqwm4\" (UniqueName: \"kubernetes.io/projected/8cfddc12-1c4c-4faf-9edb-71fb80608785-kube-api-access-lqwm4\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: 
I0219 05:44:33.219913 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292526 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292591 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292670 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292705 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292721 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") pod 
\"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292748 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292767 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.294907 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.300582 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.300948 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.308617 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.311040 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.323722 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.330238 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.341217 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.374485 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.511326 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.600976 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601145 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601182 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601207 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601241 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601398 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601450 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601623 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.634746 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs" (OuterVolumeSpecName: "logs") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.637128 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.640593 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.654473 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq" (OuterVolumeSpecName: "kube-api-access-2d4wq") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "kube-api-access-2d4wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.683531 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts" (OuterVolumeSpecName: "scripts") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.700538 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704643 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704659 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704670 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704690 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704700 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704710 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.769157 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data" (OuterVolumeSpecName: "config-data") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.793686 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.808467 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.808501 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.811831 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.916551 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.922485 5012 generic.go:334] "Generic (PLEG): container finished" podID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerID="4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46" exitCode=0 Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.922948 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.924375 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerDied","Data":"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46"} Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.924468 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerDied","Data":"9f6241f52b36b9304734fa39b59f3e6db469ba06ede3efe69c7f2c281f65bc4e"} Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.924497 5012 scope.go:117] "RemoveContainer" containerID="4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.976454 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.993487 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.008084 5012 scope.go:117] "RemoveContainer" containerID="a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.022691 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.041244 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.053218 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: E0219 05:44:34.053709 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-httpd" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.053728 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-httpd" Feb 19 05:44:34 crc kubenswrapper[5012]: E0219 05:44:34.053768 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-log" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.053777 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-log" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.055214 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-log" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.055243 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-httpd" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.058310 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.060840 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.061087 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.061256 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.070770 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 05:44:34 crc kubenswrapper[5012]: W0219 05:44:34.075628 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b39b6f2_c394_449b_9c41_1b09eabce119.slice/crio-f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8 WatchSource:0}: Error finding container f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8: Status 404 returned error can't find the container with id f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.088054 5012 scope.go:117] "RemoveContainer" containerID="4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46"
Feb 19 05:44:34 crc kubenswrapper[5012]: E0219 05:44:34.089041 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46\": container with ID starting with 4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46 not found: ID does not exist" containerID="4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.097450 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46"} err="failed to get container status \"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46\": rpc error: code = NotFound desc = could not find container \"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46\": container with ID starting with 4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46 not found: ID does not exist"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.097523 5012 scope.go:117] "RemoveContainer" containerID="a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5"
Feb 19 05:44:34 crc kubenswrapper[5012]: E0219 05:44:34.098343 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5\": container with ID starting with a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5 not found: ID does not exist" containerID="a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.098405 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5"} err="failed to get container status \"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5\": rpc error: code = NotFound desc = could not find container \"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5\": container with ID starting with a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5 not found: ID does not exist"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120777 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhn2\" (UniqueName: \"kubernetes.io/projected/f55309b7-09e5-4496-8995-f03681386729-kube-api-access-hzhn2\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120832 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120868 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120887 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-logs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120907 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120962 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120993 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.121011 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223787 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhn2\" (UniqueName: \"kubernetes.io/projected/f55309b7-09e5-4496-8995-f03681386729-kube-api-access-hzhn2\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223945 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223967 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-logs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223990 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224076 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224137 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224157 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224411 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224659 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-logs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224781 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.242604 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.242803 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.243530 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.251582 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.252034 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhn2\" (UniqueName: \"kubernetes.io/projected/f55309b7-09e5-4496-8995-f03681386729-kube-api-access-hzhn2\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.265018 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.385279 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.727946 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236f420e-8855-41f8-8b25-813be7b28799" path="/var/lib/kubelet/pods/236f420e-8855-41f8-8b25-813be7b28799/volumes"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.729206 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" path="/var/lib/kubelet/pods/50127c6b-476e-473a-877d-00fd5feb6bb4/volumes"
Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.730582 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" path="/var/lib/kubelet/pods/74c05972-714b-4cc7-97f6-d4a2c205eb08/volumes"
Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.012109 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"}
Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.012563 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8"}
Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.036543 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.036973 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cfddc12-1c4c-4faf-9edb-71fb80608785","Type":"ContainerStarted","Data":"a07a215e56bc0e36f22b891ce490691f5090648c771dc08f0cfc827d1d4d7d16"}
Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.037021 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cfddc12-1c4c-4faf-9edb-71fb80608785","Type":"ContainerStarted","Data":"7a4c697cb6fe382f66c0123db4073f247ee72978d1591752485504ead840944a"}
Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.668721 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77b847d784-sfqqm"
Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.053265 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cfddc12-1c4c-4faf-9edb-71fb80608785","Type":"ContainerStarted","Data":"531dcb1099e0ff4ea1b58f4b2eeecebbe921e6e4b7e4593112729de65ca8fada"}
Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.055928 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"}
Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.055972 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"}
Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.057940 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f55309b7-09e5-4496-8995-f03681386729","Type":"ContainerStarted","Data":"74a78037986e932b26327ff91893dabb73c43e6fd09d9775961dff6280864fb8"}
Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.058048 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f55309b7-09e5-4496-8995-f03681386729","Type":"ContainerStarted","Data":"6d126f6471b267813789f286763a4d78d61062c05bc120621e5ce174d1455fe7"}
Feb 19 05:44:37 crc kubenswrapper[5012]: I0219 05:44:37.074196 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f55309b7-09e5-4496-8995-f03681386729","Type":"ContainerStarted","Data":"b0e0db07b2277ab3c92bcf3557604a255cfe117c0f4c45cb962fff603c6fb7d1"}
Feb 19 05:44:37 crc kubenswrapper[5012]: I0219 05:44:37.092522 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.092507522 podStartE2EDuration="5.092507522s" podCreationTimestamp="2026-02-19 05:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:36.078714609 +0000 UTC m=+1172.112037178" watchObservedRunningTime="2026-02-19 05:44:37.092507522 +0000 UTC m=+1173.125830091"
Feb 19 05:44:37 crc kubenswrapper[5012]: I0219 05:44:37.104160 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.104141996 podStartE2EDuration="4.104141996s" podCreationTimestamp="2026-02-19 05:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:37.093554128 +0000 UTC m=+1173.126876697" watchObservedRunningTime="2026-02-19 05:44:37.104141996 +0000 UTC m=+1173.137464565"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.087596 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-central-agent" containerID="cri-o://5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" gracePeriod=30
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.087962 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"}
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.087998 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.088013 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="proxy-httpd" containerID="cri-o://d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" gracePeriod=30
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.088142 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="sg-core" containerID="cri-o://d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" gracePeriod=30
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.088178 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-notification-agent" containerID="cri-o://6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" gracePeriod=30
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.122936 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.152808624 podStartE2EDuration="6.1229164s" podCreationTimestamp="2026-02-19 05:44:32 +0000 UTC" firstStartedPulling="2026-02-19 05:44:34.088065541 +0000 UTC m=+1170.121388110" lastFinishedPulling="2026-02-19 05:44:37.058173317 +0000 UTC m=+1173.091495886" observedRunningTime="2026-02-19 05:44:38.120621775 +0000 UTC m=+1174.153944364" watchObservedRunningTime="2026-02-19 05:44:38.1229164 +0000 UTC m=+1174.156238969"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.533397 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-j7vgh"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.534570 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j7vgh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.553330 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j7vgh"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.630789 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vj27c"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.632040 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vj27c"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.645249 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.646537 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a4a6-account-create-update-tz4l9"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.653961 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.659014 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.666084 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vj27c"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.730182 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.730234 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.818595 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.820050 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9gsgt"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833348 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833393 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833540 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833564 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833592 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833624 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.834269 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.834690 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.835982 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.838035 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.854358 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.864130 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.866034 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"]
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.894023 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j7vgh"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935700 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935773 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935863 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935887 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935948 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.936775 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.936777 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.961353 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.977015 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vj27c"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.978931 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9"
Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.994064 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a4a6-account-create-update-tz4l9"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.044895 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.048484 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.048748 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.048825 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.055867 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"]
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.058685 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.058994 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.060608 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.070507 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.095781 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.114854 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"]
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132389 5012 generic.go:334] "Generic (PLEG): container finished" podID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" exitCode=0
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132437 5012 generic.go:334] "Generic (PLEG): container finished" podID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" exitCode=2
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132446 5012 generic.go:334] "Generic (PLEG): container finished" podID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" exitCode=0
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132456 5012 generic.go:334] "Generic (PLEG): container finished" podID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" exitCode=0
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132487 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"}
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132525 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"}
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132540 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"}
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132549 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"}
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132558 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8"}
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132577 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"
Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132761 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.150846 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.150975 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.152156 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.156935 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.171717 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.220266 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256036 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256249 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256287 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256323 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") pod 
\"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256370 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256391 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256419 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256693 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256754 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " 
pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.258014 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.260828 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.265766 5012 scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.265939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts" (OuterVolumeSpecName: "scripts") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.271017 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47" (OuterVolumeSpecName: "kube-api-access-m9h47") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "kube-api-access-m9h47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.304448 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359293 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359382 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359502 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359513 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359522 5012 reconciler_common.go:293] "Volume detached for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359531 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359541 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.362682 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.378801 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.399860 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.412465 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data" (OuterVolumeSpecName: "config-data") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.450799 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.462906 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.462945 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.528830 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.536331 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572426 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.572806 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="proxy-httpd" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572822 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="proxy-httpd" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.572841 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="sg-core" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572847 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="sg-core" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.572866 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-central-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572872 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-central-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.572882 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-notification-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572888 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-notification-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.573052 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="proxy-httpd" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.573068 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="sg-core" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.573085 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-notification-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.573100 5012 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-central-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.580486 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.586253 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.594764 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.594960 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.625379 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j7vgh"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.677647 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:39 crc kubenswrapper[5012]: W0219 05:44:39.724447 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fc398d7_f426_420d_981c_6bda415a2ce0.slice/crio-b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88 WatchSource:0}: Error finding container b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88: Status 404 returned error can't find the container with id b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88 Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.737038 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.764563 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.778885 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.778932 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.778963 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.779557 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.779588 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.779615 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.779651 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.784533 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.787455 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.787504 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} err="failed to get container status \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.787532 5012 
scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.788006 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.788044 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} err="failed to get container status \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.788071 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.788325 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.788350 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} err="failed to get container status \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": rpc error: code = NotFound desc = could not find container \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.788364 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.792959 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.793472 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} err="failed to get container status \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.793490 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.798354 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-db-create-vj27c"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.812803 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} err="failed to get container status \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.812846 5012 scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.813558 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} err="failed to get container status \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.813587 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.814426 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} err="failed to get container status \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": rpc error: code = NotFound desc = could not find container 
\"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.814459 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.814881 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} err="failed to get container status \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.814894 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.815205 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} err="failed to get container status \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.815222 5012 scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.819472 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} err="failed to get container status \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.819507 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.826161 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} err="failed to get container status \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": rpc error: code = NotFound desc = could not find container \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.826204 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.828743 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} err="failed to get container status \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 
5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.828763 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.830030 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} err="failed to get container status \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.830072 5012 scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.830507 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} err="failed to get container status \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.830553 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.831681 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} err="failed to get container status \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": rpc error: code = NotFound desc = could not find container \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.831720 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.832421 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} err="failed to get container status \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: W0219 05:44:39.836537 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768cc9af_66f9_4972_a2b4_a69b0fb15b3d.slice/crio-f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c WatchSource:0}: Error finding container f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c: Status 404 returned error can't find the container with id f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.881785 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882080 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882167 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882224 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882274 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882383 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 
05:44:39.882431 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.884731 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.893413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.894986 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.896373 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " 
pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.897532 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.906121 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.974421 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.027240 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.040321 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"] Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.048580 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.048964 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77b847d784-sfqqm" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-api" containerID="cri-o://9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" gracePeriod=30 Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.049422 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77b847d784-sfqqm" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-httpd" containerID="cri-o://6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" gracePeriod=30 Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.189519 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" event={"ID":"0b1a4d80-a736-41c3-9157-c0a696c10eff","Type":"ContainerStarted","Data":"39e7d198d33d90bf562a2f8d83ddbd204e351ae8b0c5bc3ebf6ffd290d67ecd5"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.203155 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vj27c" event={"ID":"768cc9af-66f9-4972-a2b4-a69b0fb15b3d","Type":"ContainerStarted","Data":"f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.203952 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"] Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.213043 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" event={"ID":"cd4d5a16-81ab-4336-99d5-570d83e4baaa","Type":"ContainerStarted","Data":"6bb09a0e711ff60e20706cda170848a83267740db4bd8dbf04824ff14d1736e8"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.215641 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j7vgh" event={"ID":"2fc398d7-f426-420d-981c-6bda415a2ce0","Type":"ContainerStarted","Data":"43cb426b1d824281e78b0291231050744f408cc09f73ab56e4ae893d291e9f7e"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.215679 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j7vgh" event={"ID":"2fc398d7-f426-420d-981c-6bda415a2ce0","Type":"ContainerStarted","Data":"b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.230969 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"] Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.621290 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-j7vgh" podStartSLOduration=2.6212583240000003 podStartE2EDuration="2.621258324s" podCreationTimestamp="2026-02-19 05:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:40.232562434 +0000 UTC m=+1176.265885003" watchObservedRunningTime="2026-02-19 05:44:40.621258324 +0000 UTC m=+1176.654580893" Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.625406 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:40 crc kubenswrapper[5012]: W0219 05:44:40.640828 5012 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e6ffe3_5533_459b_989b_e04f94b8f8ba.slice/crio-dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76 WatchSource:0}: Error finding container dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76: Status 404 returned error can't find the container with id dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76 Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.725412 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" path="/var/lib/kubelet/pods/1b39b6f2-c394-449b-9c41-1b09eabce119/volumes" Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.228412 5012 generic.go:334] "Generic (PLEG): container finished" podID="0b1a4d80-a736-41c3-9157-c0a696c10eff" containerID="db7dcb78edaee0fd0bebd3da354f9bac23c709f6cc5c4054736fd0aaea637cae" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.228797 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" event={"ID":"0b1a4d80-a736-41c3-9157-c0a696c10eff","Type":"ContainerDied","Data":"db7dcb78edaee0fd0bebd3da354f9bac23c709f6cc5c4054736fd0aaea637cae"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.232545 5012 generic.go:334] "Generic (PLEG): container finished" podID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" containerID="32216c19b01878e03cf37157a913c36ac04ed37d7c518d5811bfc0096e2fc84b" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.232599 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vj27c" event={"ID":"768cc9af-66f9-4972-a2b4-a69b0fb15b3d","Type":"ContainerDied","Data":"32216c19b01878e03cf37157a913c36ac04ed37d7c518d5811bfc0096e2fc84b"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.239065 5012 generic.go:334] "Generic (PLEG): container finished" podID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" 
containerID="48110d1bed52f125950a67152ee45f991adbabd56a8a45d17e8316bb03423870" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.239138 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" event={"ID":"cd4d5a16-81ab-4336-99d5-570d83e4baaa","Type":"ContainerDied","Data":"48110d1bed52f125950a67152ee45f991adbabd56a8a45d17e8316bb03423870"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.244564 5012 generic.go:334] "Generic (PLEG): container finished" podID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerID="6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.244658 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerDied","Data":"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.248433 5012 generic.go:334] "Generic (PLEG): container finished" podID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" containerID="2adf806f0d4859a0678f70c1d3e40183b96910ec8d7d4b4dd3e550a8e559d848" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.248602 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" event={"ID":"80e98ac0-3018-4566-95b3-2d2dfd3e234e","Type":"ContainerDied","Data":"2adf806f0d4859a0678f70c1d3e40183b96910ec8d7d4b4dd3e550a8e559d848"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.248688 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" event={"ID":"80e98ac0-3018-4566-95b3-2d2dfd3e234e","Type":"ContainerStarted","Data":"fc9e0b36ba215a7b15a043e893474e804ffe85a7b980f7c6bfcf172eebfcba2c"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.250369 5012 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.250402 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.256555 5012 generic.go:334] "Generic (PLEG): container finished" podID="2fc398d7-f426-420d-981c-6bda415a2ce0" containerID="43cb426b1d824281e78b0291231050744f408cc09f73ab56e4ae893d291e9f7e" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.256732 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j7vgh" event={"ID":"2fc398d7-f426-420d-981c-6bda415a2ce0","Type":"ContainerDied","Data":"43cb426b1d824281e78b0291231050744f408cc09f73ab56e4ae893d291e9f7e"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.278416 5012 generic.go:334] "Generic (PLEG): container finished" podID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" containerID="7b6605dba53e000181057a053dcadb95742b096a45a5fa3c7a87f8e866bb1bd9" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.278500 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9gsgt" event={"ID":"efae98df-8f23-4e6b-bad0-f2c7a58fb86d","Type":"ContainerDied","Data":"7b6605dba53e000181057a053dcadb95742b096a45a5fa3c7a87f8e866bb1bd9"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.278532 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9gsgt" event={"ID":"efae98df-8f23-4e6b-bad0-f2c7a58fb86d","Type":"ContainerStarted","Data":"22014db8dc001ca072a53bc19b4abaaf826ac34b43c61bc952f4be5c2e88203d"} Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 
05:44:42.291379 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec"} Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.291794 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3"} Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.625912 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.759469 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") pod \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.759572 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") pod \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.765831 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80e98ac0-3018-4566-95b3-2d2dfd3e234e" (UID: "80e98ac0-3018-4566-95b3-2d2dfd3e234e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.768415 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw" (OuterVolumeSpecName: "kube-api-access-5svrw") pod "80e98ac0-3018-4566-95b3-2d2dfd3e234e" (UID: "80e98ac0-3018-4566-95b3-2d2dfd3e234e"). InnerVolumeSpecName "kube-api-access-5svrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.844950 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.862096 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.862125 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.964438 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") pod \"0b1a4d80-a736-41c3-9157-c0a696c10eff\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.964885 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") pod \"0b1a4d80-a736-41c3-9157-c0a696c10eff\" (UID: 
\"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.965961 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b1a4d80-a736-41c3-9157-c0a696c10eff" (UID: "0b1a4d80-a736-41c3-9157-c0a696c10eff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.975211 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6" (OuterVolumeSpecName: "kube-api-access-bwbw6") pod "0b1a4d80-a736-41c3-9157-c0a696c10eff" (UID: "0b1a4d80-a736-41c3-9157-c0a696c10eff"). InnerVolumeSpecName "kube-api-access-bwbw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.048598 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.067644 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.067675 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.111005 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.115072 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.160352 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.170899 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") pod \"2fc398d7-f426-420d-981c-6bda415a2ce0\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171579 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") pod \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171660 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") pod \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171688 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") pod 
\"2fc398d7-f426-420d-981c-6bda415a2ce0\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171746 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") pod \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171770 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") pod \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.175806 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2fc398d7-f426-420d-981c-6bda415a2ce0" (UID: "2fc398d7-f426-420d-981c-6bda415a2ce0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.176291 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efae98df-8f23-4e6b-bad0-f2c7a58fb86d" (UID: "efae98df-8f23-4e6b-bad0-f2c7a58fb86d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.180070 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2" (OuterVolumeSpecName: "kube-api-access-wchr2") pod "cd4d5a16-81ab-4336-99d5-570d83e4baaa" (UID: "cd4d5a16-81ab-4336-99d5-570d83e4baaa"). InnerVolumeSpecName "kube-api-access-wchr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.185545 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f" (OuterVolumeSpecName: "kube-api-access-5hs9f") pod "efae98df-8f23-4e6b-bad0-f2c7a58fb86d" (UID: "efae98df-8f23-4e6b-bad0-f2c7a58fb86d"). InnerVolumeSpecName "kube-api-access-5hs9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.188680 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7" (OuterVolumeSpecName: "kube-api-access-xsvz7") pod "2fc398d7-f426-420d-981c-6bda415a2ce0" (UID: "2fc398d7-f426-420d-981c-6bda415a2ce0"). InnerVolumeSpecName "kube-api-access-xsvz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.192133 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd4d5a16-81ab-4336-99d5-570d83e4baaa" (UID: "cd4d5a16-81ab-4336-99d5-570d83e4baaa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273202 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") pod \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273383 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") pod \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273864 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273898 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273907 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273917 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273925 5012 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273933 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.274295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "768cc9af-66f9-4972-a2b4-a69b0fb15b3d" (UID: "768cc9af-66f9-4972-a2b4-a69b0fb15b3d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.276809 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v" (OuterVolumeSpecName: "kube-api-access-6hn7v") pod "768cc9af-66f9-4972-a2b4-a69b0fb15b3d" (UID: "768cc9af-66f9-4972-a2b4-a69b0fb15b3d"). InnerVolumeSpecName "kube-api-access-6hn7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.304345 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" event={"ID":"cd4d5a16-81ab-4336-99d5-570d83e4baaa","Type":"ContainerDied","Data":"6bb09a0e711ff60e20706cda170848a83267740db4bd8dbf04824ff14d1736e8"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.304398 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb09a0e711ff60e20706cda170848a83267740db4bd8dbf04824ff14d1736e8" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.304455 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.306914 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" event={"ID":"80e98ac0-3018-4566-95b3-2d2dfd3e234e","Type":"ContainerDied","Data":"fc9e0b36ba215a7b15a043e893474e804ffe85a7b980f7c6bfcf172eebfcba2c"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.306960 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9e0b36ba215a7b15a043e893474e804ffe85a7b980f7c6bfcf172eebfcba2c" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.307028 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.321323 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j7vgh" event={"ID":"2fc398d7-f426-420d-981c-6bda415a2ce0","Type":"ContainerDied","Data":"b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.321364 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.321482 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.335768 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.335798 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9gsgt" event={"ID":"efae98df-8f23-4e6b-bad0-f2c7a58fb86d","Type":"ContainerDied","Data":"22014db8dc001ca072a53bc19b4abaaf826ac34b43c61bc952f4be5c2e88203d"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.335834 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22014db8dc001ca072a53bc19b4abaaf826ac34b43c61bc952f4be5c2e88203d" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.341479 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" event={"ID":"0b1a4d80-a736-41c3-9157-c0a696c10eff","Type":"ContainerDied","Data":"39e7d198d33d90bf562a2f8d83ddbd204e351ae8b0c5bc3ebf6ffd290d67ecd5"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.341523 5012 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="39e7d198d33d90bf562a2f8d83ddbd204e351ae8b0c5bc3ebf6ffd290d67ecd5" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.341613 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.342769 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.342802 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.360073 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vj27c" event={"ID":"768cc9af-66f9-4972-a2b4-a69b0fb15b3d","Type":"ContainerDied","Data":"f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.360106 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.360152 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.375772 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.375808 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.405536 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.419438 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.051421 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.093742 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.093855 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.093941 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.094085 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.094166 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.100495 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4" (OuterVolumeSpecName: "kube-api-access-cm6t4") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "kube-api-access-cm6t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.101435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.195807 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.198424 5012 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.198459 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.198469 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.201410 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.214733 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config" (OuterVolumeSpecName: "config") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.299416 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.299445 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371158 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce"} Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371278 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-central-agent" containerID="cri-o://34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe" gracePeriod=30 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371339 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371395 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="proxy-httpd" containerID="cri-o://694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce" gracePeriod=30 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371431 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="sg-core" 
containerID="cri-o://cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec" gracePeriod=30 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371481 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-notification-agent" containerID="cri-o://4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3" gracePeriod=30 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.374809 5012 generic.go:334] "Generic (PLEG): container finished" podID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerID="9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" exitCode=0 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.375604 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376404 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerDied","Data":"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a"} Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376511 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerDied","Data":"05d6404e6cfe0f5924141acac1a5c449939eddf44dc7eb77958158988b1bb5ee"} Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376614 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376707 5012 scope.go:117] "RemoveContainer" containerID="6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376978 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-external-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.386026 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.386703 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.396218 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.565078825 podStartE2EDuration="5.396196148s" podCreationTimestamp="2026-02-19 05:44:39 +0000 UTC" firstStartedPulling="2026-02-19 05:44:40.643203949 +0000 UTC m=+1176.676526518" lastFinishedPulling="2026-02-19 05:44:43.474321272 +0000 UTC m=+1179.507643841" observedRunningTime="2026-02-19 05:44:44.391801341 +0000 UTC m=+1180.425123900" watchObservedRunningTime="2026-02-19 05:44:44.396196148 +0000 UTC m=+1180.429518717" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.417394 5012 scope.go:117] "RemoveContainer" containerID="9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.422548 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.430877 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.436981 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.450385 5012 scope.go:117] "RemoveContainer" containerID="6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" Feb 19 05:44:44 crc kubenswrapper[5012]: E0219 05:44:44.454371 5012 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f\": container with ID starting with 6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f not found: ID does not exist" containerID="6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.454403 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f"} err="failed to get container status \"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f\": rpc error: code = NotFound desc = could not find container \"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f\": container with ID starting with 6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f not found: ID does not exist" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.454420 5012 scope.go:117] "RemoveContainer" containerID="9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" Feb 19 05:44:44 crc kubenswrapper[5012]: E0219 05:44:44.457361 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a\": container with ID starting with 9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a not found: ID does not exist" containerID="9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.457387 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a"} err="failed to get container status \"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a\": rpc error: code = NotFound desc = could not find container 
\"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a\": container with ID starting with 9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a not found: ID does not exist" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.459856 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.712560 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" path="/var/lib/kubelet/pods/20fc844f-415a-4c39-b2ac-966ff2a43a43/volumes" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.395818 5012 generic.go:334] "Generic (PLEG): container finished" podID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerID="694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce" exitCode=0 Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396127 5012 generic.go:334] "Generic (PLEG): container finished" podID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerID="cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec" exitCode=2 Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396136 5012 generic.go:334] "Generic (PLEG): container finished" podID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerID="4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3" exitCode=0 Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396142 5012 generic.go:334] "Generic (PLEG): container finished" podID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerID="34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe" exitCode=0 Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396227 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 
05:44:45.396283 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396294 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396354 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396363 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396373 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.407545 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.409413 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.419336 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.537082 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538118 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538144 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538166 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538210 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538262 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538401 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538682 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.539219 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.539258 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.545337 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd" (OuterVolumeSpecName: "kube-api-access-2wfvd") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "kube-api-access-2wfvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.548825 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts" (OuterVolumeSpecName: "scripts") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.584544 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.640645 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.640812 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.640904 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.667522 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.676691 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data" (OuterVolumeSpecName: "config-data") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.743354 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.743540 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.385876 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.390906 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.462103 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.507133 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.525624 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.531942 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532400 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532614 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532627 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1a4d80-a736-41c3-9157-c0a696c10eff" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532634 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1a4d80-a736-41c3-9157-c0a696c10eff" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532651 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="sg-core" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532657 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="sg-core" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532673 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532679 5012 
state_mem.go:107] "Deleted CPUSet assignment" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532687 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532692 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532711 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532716 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532726 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc398d7-f426-420d-981c-6bda415a2ce0" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532732 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc398d7-f426-420d-981c-6bda415a2ce0" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532744 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532749 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532758 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" 
containerName="ceilometer-notification-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532764 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-notification-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532775 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="proxy-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532781 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="proxy-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532792 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-central-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532800 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-central-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532811 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-api" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532816 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-api" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532985 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="sg-core" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533000 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc398d7-f426-420d-981c-6bda415a2ce0" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533011 5012 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-notification-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533023 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-api" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533033 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533042 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533050 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533062 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533070 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1a4d80-a736-41c3-9157-c0a696c10eff" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533077 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="proxy-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533087 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-central-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533096 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" containerName="mariadb-database-create" Feb 19 05:44:46 crc 
kubenswrapper[5012]: I0219 05:44:46.534822 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.537750 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.538026 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.543791 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660457 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660729 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660781 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660808 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660826 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660858 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660898 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.704594 5012 scope.go:117] "RemoveContainer" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.720647 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" path="/var/lib/kubelet/pods/f2e6ffe3-5533-459b-989b-e04f94b8f8ba/volumes" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763062 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") pod \"ceilometer-0\" (UID: 
\"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763106 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763147 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763194 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763261 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763280 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763334 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763790 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763852 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.771209 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.772171 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.783055 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.788420 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.792325 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.863920 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.335501 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.336458 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.500687 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"8189678acbed8117b25379f69dcbf461a7f4e3e9e52f112862c9f9884dc160dd"} Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.502588 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6"} Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.514793 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.514902 5012 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.605766 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.522635 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b"} Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.523160 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53"} Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.907592 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"] Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.908782 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.910824 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bjq99" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.910981 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.911103 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.941580 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"] Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.004667 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.004721 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.004837 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " 
pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.004881 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.106594 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.106935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.106972 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.107072 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: 
\"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.110558 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.111049 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.113405 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.127752 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.237192 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.549817 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023"} Feb 19 05:44:49 crc kubenswrapper[5012]: W0219 05:44:49.871432 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f256783_305c_4782_81c0_5aed8867b7e3.slice/crio-febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da WatchSource:0}: Error finding container febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da: Status 404 returned error can't find the container with id febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.875041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"] Feb 19 05:44:50 crc kubenswrapper[5012]: I0219 05:44:50.565546 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1"} Feb 19 05:44:50 crc kubenswrapper[5012]: I0219 05:44:50.567501 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:44:50 crc kubenswrapper[5012]: I0219 05:44:50.571170 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vz94t" event={"ID":"3f256783-305c-4782-81c0-5aed8867b7e3","Type":"ContainerStarted","Data":"febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da"} Feb 19 05:44:50 crc kubenswrapper[5012]: I0219 05:44:50.599200 5012 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=2.029786951 podStartE2EDuration="4.599182565s" podCreationTimestamp="2026-02-19 05:44:46 +0000 UTC" firstStartedPulling="2026-02-19 05:44:47.336253312 +0000 UTC m=+1183.369575881" lastFinishedPulling="2026-02-19 05:44:49.905648926 +0000 UTC m=+1185.938971495" observedRunningTime="2026-02-19 05:44:50.590800211 +0000 UTC m=+1186.624122790" watchObservedRunningTime="2026-02-19 05:44:50.599182565 +0000 UTC m=+1186.632505144" Feb 19 05:44:51 crc kubenswrapper[5012]: I0219 05:44:51.808787 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:51 crc kubenswrapper[5012]: I0219 05:44:51.809201 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:51 crc kubenswrapper[5012]: I0219 05:44:51.860244 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:52 crc kubenswrapper[5012]: I0219 05:44:52.621282 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:52 crc kubenswrapper[5012]: I0219 05:44:52.663788 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:54 crc kubenswrapper[5012]: I0219 05:44:54.603414 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" containerID="cri-o://2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6" gracePeriod=30 Feb 19 05:44:55 crc kubenswrapper[5012]: I0219 05:44:55.678629 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerID="2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6" exitCode=0 Feb 19 05:44:55 
crc kubenswrapper[5012]: I0219 05:44:55.678711 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6"} Feb 19 05:44:55 crc kubenswrapper[5012]: I0219 05:44:55.678925 5012 scope.go:117] "RemoveContainer" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.253774 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389253 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") pod \"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389376 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") pod \"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389482 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") pod \"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389543 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") pod 
\"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389615 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") pod \"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs" (OuterVolumeSpecName: "logs") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.424761 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g" (OuterVolumeSpecName: "kube-api-access-cpx9g") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "kube-api-access-cpx9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.431549 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.434426 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.461587 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data" (OuterVolumeSpecName: "config-data") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492127 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492341 5012 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492423 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492498 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") on node \"crc\" DevicePath \"\"" 
Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492565 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.691812 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"27cdd4f4a5ee55d08e9db9c6e3380ff5674b5137557956c3e1a7be05a457c3b6"} Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.692527 5012 scope.go:117] "RemoveContainer" containerID="2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.692765 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.738712 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.759104 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.775337 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:56 crc kubenswrapper[5012]: E0219 05:44:56.776081 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776111 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: E0219 05:44:56.776128 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776139 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: E0219 05:44:56.776156 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776164 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776463 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776485 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.777477 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.785133 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.790025 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.903295 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.903615 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.903899 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.904187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87036fc-fa94-4038-8b65-bb85d8ff6f10-logs\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.904407 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6g8\" (UniqueName: \"kubernetes.io/projected/f87036fc-fa94-4038-8b65-bb85d8ff6f10-kube-api-access-xn6g8\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017346 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87036fc-fa94-4038-8b65-bb85d8ff6f10-logs\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017507 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6g8\" (UniqueName: \"kubernetes.io/projected/f87036fc-fa94-4038-8b65-bb85d8ff6f10-kube-api-access-xn6g8\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017662 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017731 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 
crc kubenswrapper[5012]: I0219 05:44:57.017808 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87036fc-fa94-4038-8b65-bb85d8ff6f10-logs\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.026231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.031053 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.037125 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.043893 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xn6g8\" (UniqueName: \"kubernetes.io/projected/f87036fc-fa94-4038-8b65-bb85d8ff6f10-kube-api-access-xn6g8\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.123343 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.147692 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.150617 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-central-agent" containerID="cri-o://da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53" gracePeriod=30 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.150709 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="proxy-httpd" containerID="cri-o://5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1" gracePeriod=30 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.150771 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-notification-agent" containerID="cri-o://2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b" gracePeriod=30 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.150726 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="sg-core" containerID="cri-o://7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023" gracePeriod=30 Feb 19 05:44:57 
crc kubenswrapper[5012]: I0219 05:44:57.684822 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.705231 5012 generic.go:334] "Generic (PLEG): container finished" podID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerID="5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1" exitCode=0 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.705276 5012 generic.go:334] "Generic (PLEG): container finished" podID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerID="7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023" exitCode=2 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.705332 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1"} Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.705422 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023"} Feb 19 05:44:58 crc kubenswrapper[5012]: I0219 05:44:58.713723 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" path="/var/lib/kubelet/pods/7fdaa495-6cde-409a-871a-e334ca3f2a91/volumes" Feb 19 05:44:59 crc kubenswrapper[5012]: I0219 05:44:59.729226 5012 generic.go:334] "Generic (PLEG): container finished" podID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerID="2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b" exitCode=0 Feb 19 05:44:59 crc kubenswrapper[5012]: I0219 05:44:59.729271 5012 generic.go:334] "Generic (PLEG): container finished" podID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerID="da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53" 
exitCode=0 Feb 19 05:44:59 crc kubenswrapper[5012]: I0219 05:44:59.729329 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b"} Feb 19 05:44:59 crc kubenswrapper[5012]: I0219 05:44:59.729369 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53"} Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.142916 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 05:45:00 crc kubenswrapper[5012]: E0219 05:45:00.143621 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.143639 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.143807 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.144493 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.146583 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.150285 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.156830 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.282981 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.283045 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.283133 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.384967 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.385080 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.385147 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.386034 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.400872 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.401089 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.467637 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.622897 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.736415 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737117 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737163 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737200 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737380 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737566 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737648 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737816 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.738550 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.738678 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.743956 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts" (OuterVolumeSpecName: "scripts") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.751471 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp" (OuterVolumeSpecName: "kube-api-access-586kp") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "kube-api-access-586kp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.772728 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"8189678acbed8117b25379f69dcbf461a7f4e3e9e52f112862c9f9884dc160dd"} Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.772738 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.772773 5012 scope.go:117] "RemoveContainer" containerID="5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.781128 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f87036fc-fa94-4038-8b65-bb85d8ff6f10","Type":"ContainerStarted","Data":"336e58c7b0dd00f44a9184d19ccf8426b738f727c252218560f8195d3a7f320e"} Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.781178 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f87036fc-fa94-4038-8b65-bb85d8ff6f10","Type":"ContainerStarted","Data":"391ee38ba53c8e8e76588386f77305034c1cba17e5da69d1e81492ce766e692b"} Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.808081 5012 scope.go:117] "RemoveContainer" containerID="7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.812489 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.817491 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vz94t" podStartSLOduration=2.146902028 podStartE2EDuration="14.817474171s" podCreationTimestamp="2026-02-19 05:44:48 +0000 UTC" firstStartedPulling="2026-02-19 05:44:49.873270548 +0000 UTC m=+1185.906593117" lastFinishedPulling="2026-02-19 05:45:02.543842651 +0000 UTC m=+1198.577165260" observedRunningTime="2026-02-19 05:45:02.798730215 +0000 UTC m=+1198.832052794" watchObservedRunningTime="2026-02-19 05:45:02.817474171 +0000 UTC m=+1198.850796740" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.822645 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.823115 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=6.823101498 podStartE2EDuration="6.823101498s" podCreationTimestamp="2026-02-19 05:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:02.813094124 +0000 UTC m=+1198.846416693" watchObservedRunningTime="2026-02-19 05:45:02.823101498 +0000 UTC m=+1198.856424067" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.831690 5012 scope.go:117] "RemoveContainer" containerID="2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841004 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841030 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841040 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841049 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841057 5012 reconciler_common.go:293] "Volume detached for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.848430 5012 scope.go:117] "RemoveContainer" containerID="da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.866373 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data" (OuterVolumeSpecName: "config-data") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.943371 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.989095 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 05:45:02 crc kubenswrapper[5012]: W0219 05:45:02.991806 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46070367_1765_4a70_b997_58b87ee1fbf1.slice/crio-522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1 WatchSource:0}: Error finding container 522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1: Status 404 returned error can't find the container with id 522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1 Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.211855 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.229395 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.260422 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:03 crc kubenswrapper[5012]: E0219 05:45:03.261139 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-notification-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261156 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-notification-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: E0219 05:45:03.261174 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="sg-core" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261181 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="sg-core" Feb 19 05:45:03 crc kubenswrapper[5012]: E0219 05:45:03.261206 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="proxy-httpd" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261228 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="proxy-httpd" Feb 19 05:45:03 crc kubenswrapper[5012]: E0219 05:45:03.261242 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-central-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261249 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-central-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261451 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="proxy-httpd" Feb 19 
05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261465 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-notification-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261477 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-central-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261495 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="sg-core" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261506 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.263136 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.268793 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.269699 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.278827 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.352200 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.352464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.352531 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.355488 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.355673 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.355828 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.355912 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") pod 
\"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457726 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457822 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457854 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457934 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457964 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.458011 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.458050 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.458994 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.459330 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.463635 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.465734 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc 
kubenswrapper[5012]: I0219 05:45:03.466569 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.466594 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.483218 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.582755 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.807066 5012 generic.go:334] "Generic (PLEG): container finished" podID="46070367-1765-4a70-b997-58b87ee1fbf1" containerID="ef8d233d5ce4a4673c65e084ba6deb20a57df07604ba44e351882efa60733381" exitCode=0 Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.807557 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" event={"ID":"46070367-1765-4a70-b997-58b87ee1fbf1","Type":"ContainerDied","Data":"ef8d233d5ce4a4673c65e084ba6deb20a57df07604ba44e351882efa60733381"} Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.807709 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" event={"ID":"46070367-1765-4a70-b997-58b87ee1fbf1","Type":"ContainerStarted","Data":"522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1"} Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.815195 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vz94t" event={"ID":"3f256783-305c-4782-81c0-5aed8867b7e3","Type":"ContainerStarted","Data":"b0ed53407a3cb3810cc4f0ec6ea8d71443cb0203ae2152d5e770b7f505f82370"} Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.900366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:04 crc kubenswrapper[5012]: I0219 05:45:04.735181 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" path="/var/lib/kubelet/pods/3420a7c2-fc4c-4491-bddd-64a534d6f3cd/volumes" Feb 19 05:45:04 crc kubenswrapper[5012]: I0219 05:45:04.829487 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5"} Feb 19 05:45:04 crc kubenswrapper[5012]: I0219 05:45:04.829532 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"1a4c3e21ec02a97624b92d231eadc367c369bbe32cd7bae830f477cfab60fbad"} Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.209352 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.313417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") pod \"46070367-1765-4a70-b997-58b87ee1fbf1\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.313560 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") pod \"46070367-1765-4a70-b997-58b87ee1fbf1\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.313730 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") pod \"46070367-1765-4a70-b997-58b87ee1fbf1\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.314639 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"46070367-1765-4a70-b997-58b87ee1fbf1" (UID: "46070367-1765-4a70-b997-58b87ee1fbf1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.323460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46070367-1765-4a70-b997-58b87ee1fbf1" (UID: "46070367-1765-4a70-b997-58b87ee1fbf1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.323488 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh" (OuterVolumeSpecName: "kube-api-access-g9vnh") pod "46070367-1765-4a70-b997-58b87ee1fbf1" (UID: "46070367-1765-4a70-b997-58b87ee1fbf1"). InnerVolumeSpecName "kube-api-access-g9vnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.415932 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.415970 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.415988 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.844148 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.844629 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" event={"ID":"46070367-1765-4a70-b997-58b87ee1fbf1","Type":"ContainerDied","Data":"522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1"} Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.844671 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.846521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832"} Feb 19 05:45:06 crc kubenswrapper[5012]: I0219 
05:45:06.866529 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b"} Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.123917 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.186319 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.882852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5"} Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.883043 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.917566 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9006012490000002 podStartE2EDuration="4.917540305s" podCreationTimestamp="2026-02-19 05:45:03 +0000 UTC" firstStartedPulling="2026-02-19 05:45:03.933259976 +0000 UTC m=+1199.966582545" lastFinishedPulling="2026-02-19 05:45:06.950199032 +0000 UTC m=+1202.983521601" observedRunningTime="2026-02-19 05:45:07.907787257 +0000 UTC m=+1203.941109826" watchObservedRunningTime="2026-02-19 05:45:07.917540305 +0000 UTC m=+1203.950862894" Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.942393 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 05:45:08 crc kubenswrapper[5012]: I0219 05:45:08.895509 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:45:17 crc kubenswrapper[5012]: I0219 05:45:17.004836 5012 generic.go:334] "Generic (PLEG): container finished" podID="3f256783-305c-4782-81c0-5aed8867b7e3" containerID="b0ed53407a3cb3810cc4f0ec6ea8d71443cb0203ae2152d5e770b7f505f82370" exitCode=0 Feb 19 05:45:17 crc kubenswrapper[5012]: I0219 05:45:17.004910 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vz94t" event={"ID":"3f256783-305c-4782-81c0-5aed8867b7e3","Type":"ContainerDied","Data":"b0ed53407a3cb3810cc4f0ec6ea8d71443cb0203ae2152d5e770b7f505f82370"} Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.474452 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.525875 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") pod \"3f256783-305c-4782-81c0-5aed8867b7e3\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.526045 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") pod \"3f256783-305c-4782-81c0-5aed8867b7e3\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.526079 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") pod \"3f256783-305c-4782-81c0-5aed8867b7e3\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.526120 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") pod \"3f256783-305c-4782-81c0-5aed8867b7e3\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.535297 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9" (OuterVolumeSpecName: "kube-api-access-j8kw9") pod "3f256783-305c-4782-81c0-5aed8867b7e3" (UID: "3f256783-305c-4782-81c0-5aed8867b7e3"). InnerVolumeSpecName "kube-api-access-j8kw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.536016 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts" (OuterVolumeSpecName: "scripts") pod "3f256783-305c-4782-81c0-5aed8867b7e3" (UID: "3f256783-305c-4782-81c0-5aed8867b7e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.579661 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data" (OuterVolumeSpecName: "config-data") pod "3f256783-305c-4782-81c0-5aed8867b7e3" (UID: "3f256783-305c-4782-81c0-5aed8867b7e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.584441 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f256783-305c-4782-81c0-5aed8867b7e3" (UID: "3f256783-305c-4782-81c0-5aed8867b7e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.629527 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.629560 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.629575 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.629588 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.034632 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vz94t" event={"ID":"3f256783-305c-4782-81c0-5aed8867b7e3","Type":"ContainerDied","Data":"febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da"} Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.034707 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.034719 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.213938 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 05:45:19 crc kubenswrapper[5012]: E0219 05:45:19.214669 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46070367-1765-4a70-b997-58b87ee1fbf1" containerName="collect-profiles" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.214701 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="46070367-1765-4a70-b997-58b87ee1fbf1" containerName="collect-profiles" Feb 19 05:45:19 crc kubenswrapper[5012]: E0219 05:45:19.214722 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f256783-305c-4782-81c0-5aed8867b7e3" containerName="nova-cell0-conductor-db-sync" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.214736 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f256783-305c-4782-81c0-5aed8867b7e3" containerName="nova-cell0-conductor-db-sync" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.215092 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="46070367-1765-4a70-b997-58b87ee1fbf1" containerName="collect-profiles" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.215136 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f256783-305c-4782-81c0-5aed8867b7e3" containerName="nova-cell0-conductor-db-sync" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.216465 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.220110 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bjq99" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.220824 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.227835 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.247336 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4558m\" (UniqueName: \"kubernetes.io/projected/6852caab-c1b6-40cd-b5df-88d22f6016bd-kube-api-access-4558m\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.247425 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.247481 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.350679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.351002 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4558m\" (UniqueName: \"kubernetes.io/projected/6852caab-c1b6-40cd-b5df-88d22f6016bd-kube-api-access-4558m\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.351103 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.355892 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.356779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.379535 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4558m\" (UniqueName: \"kubernetes.io/projected/6852caab-c1b6-40cd-b5df-88d22f6016bd-kube-api-access-4558m\") pod \"nova-cell0-conductor-0\" (UID: 
\"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0"
Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.564171 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.916195 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 19 05:45:20 crc kubenswrapper[5012]: I0219 05:45:20.051769 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6852caab-c1b6-40cd-b5df-88d22f6016bd","Type":"ContainerStarted","Data":"8632a068f01cba262dbe94641df1a4dc199f5d9de4a76d5d019edc0991514ad1"}
Feb 19 05:45:21 crc kubenswrapper[5012]: I0219 05:45:21.069989 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6852caab-c1b6-40cd-b5df-88d22f6016bd","Type":"ContainerStarted","Data":"b0916e8a409d5228426e51fc0080affcdf4fe2e92265e00325e20518ef1ee8d7"}
Feb 19 05:45:21 crc kubenswrapper[5012]: I0219 05:45:21.071693 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 19 05:45:21 crc kubenswrapper[5012]: I0219 05:45:21.098748 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.098724335 podStartE2EDuration="2.098724335s" podCreationTimestamp="2026-02-19 05:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:21.091733365 +0000 UTC m=+1217.125055974" watchObservedRunningTime="2026-02-19 05:45:21.098724335 +0000 UTC m=+1217.132046904"
Feb 19 05:45:29 crc kubenswrapper[5012]: I0219 05:45:29.615829 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.261529 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.267812 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.276411 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.285248 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.292999 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.302906 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.303096 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.303221 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.303326 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.406804 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.406871 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.406917 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.407019 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.415083 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.417711 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.427368 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.429749 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.435758 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.438259 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.445721 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.451328 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.509439 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.509576 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.509618 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.509698 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.549414 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.551352 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.556167 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.577565 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.600008 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.627185 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.627262 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.627352 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.627385 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.629711 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.638668 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.649162 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.660169 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.732275 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.732346 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.732405 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.732475 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.744415 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.745831 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.747078 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.755338 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.765872 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.767528 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.794998 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.807973 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.809966 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.818067 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838662 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838731 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838760 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838798 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838819 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838859 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838880 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838967 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.839019 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.839039 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.839081 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.839097 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.840657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.850413 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.852722 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.861482 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.863154 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.911049 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941252 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941338 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941377 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941413 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941435 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941463 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941494 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941521 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941561 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941588 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941611 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941646 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.943521 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.944082 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.944657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.962067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.957110 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.964455 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.980406 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.985001 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.022378 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.044421 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.044481 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.044567 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.057108 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.058066 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.069552 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.101118 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.110352 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.166728 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.207531 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.440434 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"]
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.629167 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.821425 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"]
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.822924 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.825620 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.828699 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.828702 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"]
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.984157 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.984753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.984825 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.984926 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.087166 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.087252 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.087326 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.087401 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.105969 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.106151 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.109063 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.129749 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.149974 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.246819 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr45z" event={"ID":"70ce9757-cdf1-4864-95ad-9d25fb9830a9","Type":"ContainerStarted","Data":"021fb32c8f118be6cb115c199b5bccac76ab7b25e96dc7239f4fa322280c2c3c"}
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.246881 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr45z" event={"ID":"70ce9757-cdf1-4864-95ad-9d25fb9830a9","Type":"ContainerStarted","Data":"7bfe1197486b22ee1f88d8b65bc65fff0d81ce7b626661bba38e8562901e7bb1"}
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.250953 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerStarted","Data":"ecb97b91d6f4ced51237b91313e01c9556310ff6ba6362c7e6f2808ce0a033d1"}
Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.604860 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nr45z" podStartSLOduration=2.604830699 podStartE2EDuration="2.604830699s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:32.266584897 +0000 UTC m=+1228.299907516" watchObservedRunningTime="2026-02-19 05:45:32.604830699 +0000 UTC m=+1228.638153268"
Feb 19 05:45:32 crc
kubenswrapper[5012]: I0219 05:45:32.640386 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.756617 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.786391 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:45:32 crc kubenswrapper[5012]: W0219 05:45:32.797096 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb843c15_c78d_4b5e_91b3_31ec0befd9fe.slice/crio-ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c WatchSource:0}: Error finding container ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c: Status 404 returned error can't find the container with id ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.815742 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.898020 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"] Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.272668 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerStarted","Data":"95592dc0a14ff2399f527d9c89ef6e64abfe874f4d909ae2d5fc00044b54d56c"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.276278 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae","Type":"ContainerStarted","Data":"f59fd4a4ac42380c62e5cbec861422215a50132042a18819d7c99682128821ac"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.281860 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" event={"ID":"7fc8fbb1-0e37-419f-86e0-6ce8db99225d","Type":"ContainerStarted","Data":"d98f7ee44c86e6ff53f7f25188347cf80c40d13a66e4ecf4956ac175a094de8b"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.283766 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb843c15-c78d-4b5e-91b3-31ec0befd9fe","Type":"ContainerStarted","Data":"ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.289883 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerStarted","Data":"8a23cad7dbe6ef631f80ea11b62d7b988e6b72ef836fd0ba728b4bc06cb53bf4"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.599295 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 05:45:34 crc kubenswrapper[5012]: I0219 05:45:34.290645 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:34 crc kubenswrapper[5012]: I0219 05:45:34.306394 5012 generic.go:334] "Generic (PLEG): container finished" podID="c1589f54-6631-4004-b2a9-e253b43b0644" containerID="ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8" exitCode=0 Feb 19 05:45:34 crc kubenswrapper[5012]: I0219 05:45:34.306455 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerDied","Data":"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8"} Feb 19 05:45:34 crc kubenswrapper[5012]: I0219 05:45:34.307383 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.344672 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerStarted","Data":"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.347123 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" event={"ID":"7fc8fbb1-0e37-419f-86e0-6ce8db99225d","Type":"ContainerStarted","Data":"0b98797f9f7e97071d4699ed1c59c23ddac69aff6fb8708f48bdc42a56a8cf34"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.349400 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc" gracePeriod=30 Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.349500 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae","Type":"ContainerStarted","Data":"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.357137 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerStarted","Data":"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.368814 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" podStartSLOduration=4.368798819 podStartE2EDuration="4.368798819s" podCreationTimestamp="2026-02-19 05:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:35.360903717 +0000 UTC 
m=+1231.394226286" watchObservedRunningTime="2026-02-19 05:45:35.368798819 +0000 UTC m=+1231.402121388" Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.372123 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerStarted","Data":"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.373205 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.383503 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.193492346 podStartE2EDuration="5.383482306s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="2026-02-19 05:45:32.633266301 +0000 UTC m=+1228.666588870" lastFinishedPulling="2026-02-19 05:45:34.823256261 +0000 UTC m=+1230.856578830" observedRunningTime="2026-02-19 05:45:35.375203805 +0000 UTC m=+1231.408526394" watchObservedRunningTime="2026-02-19 05:45:35.383482306 +0000 UTC m=+1231.416804895" Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.406172 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" podStartSLOduration=5.406142867 podStartE2EDuration="5.406142867s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:35.397773613 +0000 UTC m=+1231.431096182" watchObservedRunningTime="2026-02-19 05:45:35.406142867 +0000 UTC m=+1231.439465436" Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.102898 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:36 crc kubenswrapper[5012]: 
I0219 05:45:36.383720 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb843c15-c78d-4b5e-91b3-31ec0befd9fe","Type":"ContainerStarted","Data":"afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800"} Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.386613 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerStarted","Data":"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5"} Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.386730 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-log" containerID="cri-o://c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" gracePeriod=30 Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.386930 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-metadata" containerID="cri-o://4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" gracePeriod=30 Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.394510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerStarted","Data":"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"} Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.410805 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.220773101 podStartE2EDuration="6.410769168s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="2026-02-19 05:45:32.800622285 +0000 UTC m=+1228.833944854" lastFinishedPulling="2026-02-19 05:45:35.990618342 +0000 UTC 
m=+1232.023940921" observedRunningTime="2026-02-19 05:45:36.406950635 +0000 UTC m=+1232.440273204" watchObservedRunningTime="2026-02-19 05:45:36.410769168 +0000 UTC m=+1232.444091737" Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.432666 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.336128885 podStartE2EDuration="6.43263525s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="2026-02-19 05:45:32.766894114 +0000 UTC m=+1228.800216683" lastFinishedPulling="2026-02-19 05:45:34.863400479 +0000 UTC m=+1230.896723048" observedRunningTime="2026-02-19 05:45:36.431864951 +0000 UTC m=+1232.465187530" watchObservedRunningTime="2026-02-19 05:45:36.43263525 +0000 UTC m=+1232.465957819" Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.459178 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.272823787 podStartE2EDuration="6.459146705s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="2026-02-19 05:45:31.639994698 +0000 UTC m=+1227.673317267" lastFinishedPulling="2026-02-19 05:45:34.826317616 +0000 UTC m=+1230.859640185" observedRunningTime="2026-02-19 05:45:36.447105822 +0000 UTC m=+1232.480428411" watchObservedRunningTime="2026-02-19 05:45:36.459146705 +0000 UTC m=+1232.492469274" Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.093259 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde786391_8b45_4a24_9c56_2d4c86d5cfba.slice/crio-4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde786391_8b45_4a24_9c56_2d4c86d5cfba.slice/crio-conmon-4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5.scope\": RecentStats: unable to find data in memory cache]" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.363289 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.403156 5012 generic.go:334] "Generic (PLEG): container finished" podID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" exitCode=0 Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.403196 5012 generic.go:334] "Generic (PLEG): container finished" podID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" exitCode=143 Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.403278 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.404169 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerDied","Data":"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5"} Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.404224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerDied","Data":"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3"} Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.404235 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerDied","Data":"95592dc0a14ff2399f527d9c89ef6e64abfe874f4d909ae2d5fc00044b54d56c"} Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.404253 5012 scope.go:117] "RemoveContainer" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.442963 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") pod \"de786391-8b45-4a24-9c56-2d4c86d5cfba\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.443104 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") pod \"de786391-8b45-4a24-9c56-2d4c86d5cfba\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.443135 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") pod \"de786391-8b45-4a24-9c56-2d4c86d5cfba\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.443161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") pod \"de786391-8b45-4a24-9c56-2d4c86d5cfba\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.444347 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs" (OuterVolumeSpecName: "logs") pod "de786391-8b45-4a24-9c56-2d4c86d5cfba" (UID: "de786391-8b45-4a24-9c56-2d4c86d5cfba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.448905 5012 scope.go:117] "RemoveContainer" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.467797 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl" (OuterVolumeSpecName: "kube-api-access-lzrnl") pod "de786391-8b45-4a24-9c56-2d4c86d5cfba" (UID: "de786391-8b45-4a24-9c56-2d4c86d5cfba"). InnerVolumeSpecName "kube-api-access-lzrnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.485442 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de786391-8b45-4a24-9c56-2d4c86d5cfba" (UID: "de786391-8b45-4a24-9c56-2d4c86d5cfba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.526914 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data" (OuterVolumeSpecName: "config-data") pod "de786391-8b45-4a24-9c56-2d4c86d5cfba" (UID: "de786391-8b45-4a24-9c56-2d4c86d5cfba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.545577 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.545609 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.545621 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.545629 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.584548 5012 scope.go:117] "RemoveContainer" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.586092 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": container with ID 
starting with 4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5 not found: ID does not exist" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.586279 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5"} err="failed to get container status \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": rpc error: code = NotFound desc = could not find container \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": container with ID starting with 4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5 not found: ID does not exist" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.586395 5012 scope.go:117] "RemoveContainer" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.586808 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": container with ID starting with c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3 not found: ID does not exist" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.586829 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3"} err="failed to get container status \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": rpc error: code = NotFound desc = could not find container \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": container with ID starting with c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3 not found: 
ID does not exist" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.586842 5012 scope.go:117] "RemoveContainer" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.587185 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5"} err="failed to get container status \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": rpc error: code = NotFound desc = could not find container \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": container with ID starting with 4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5 not found: ID does not exist" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.587209 5012 scope.go:117] "RemoveContainer" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.587446 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3"} err="failed to get container status \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": rpc error: code = NotFound desc = could not find container \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": container with ID starting with c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3 not found: ID does not exist" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.738144 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.753180 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.768398 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.768888 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-log" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.768905 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-log" Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.768932 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-metadata" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.768939 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-metadata" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.769136 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-log" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.769159 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-metadata" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.770764 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.776988 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.779356 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.820560 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.850905 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.851065 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.851130 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.851154 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") pod \"nova-metadata-0\" (UID: 
\"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.851207 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.953099 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.953195 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.953234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.953291 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 
05:45:37.953968 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.954160 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.959240 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.959469 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.966071 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.972980 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") pod 
\"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.095899 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.574827 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.575379 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerName="kube-state-metrics" containerID="cri-o://caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" gracePeriod=30 Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.585034 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.714083 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" path="/var/lib/kubelet/pods/de786391-8b45-4a24-9c56-2d4c86d5cfba/volumes" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.081910 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.179340 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") pod \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\" (UID: \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\") " Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.189224 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4" (OuterVolumeSpecName: "kube-api-access-jcbl4") pod "6c04ef21-3d68-44e8-ba69-164f3b32b7a0" (UID: "6c04ef21-3d68-44e8-ba69-164f3b32b7a0"). InnerVolumeSpecName "kube-api-access-jcbl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.282714 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.426342 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerStarted","Data":"a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.426388 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerStarted","Data":"6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.426401 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerStarted","Data":"1f7a7e5be52d1531162f80520ab7a9bc9939afb0e0d82ef9812331d77771bcd1"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427677 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerID="caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" exitCode=2 Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427725 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c04ef21-3d68-44e8-ba69-164f3b32b7a0","Type":"ContainerDied","Data":"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427735 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427759 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c04ef21-3d68-44e8-ba69-164f3b32b7a0","Type":"ContainerDied","Data":"1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427779 5012 scope.go:117] "RemoveContainer" containerID="caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.456870 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.456836832 podStartE2EDuration="2.456836832s" podCreationTimestamp="2026-02-19 05:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:39.450852177 +0000 UTC m=+1235.484174746" watchObservedRunningTime="2026-02-19 05:45:39.456836832 +0000 UTC m=+1235.490159401" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.465095 
5012 scope.go:117] "RemoveContainer" containerID="caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" Feb 19 05:45:39 crc kubenswrapper[5012]: E0219 05:45:39.465889 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73\": container with ID starting with caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73 not found: ID does not exist" containerID="caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.466067 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73"} err="failed to get container status \"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73\": rpc error: code = NotFound desc = could not find container \"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73\": container with ID starting with caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73 not found: ID does not exist" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.498414 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.536422 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.555913 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:39 crc kubenswrapper[5012]: E0219 05:45:39.556665 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerName="kube-state-metrics" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.556680 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerName="kube-state-metrics" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.557079 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerName="kube-state-metrics" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.558147 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.561842 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.562254 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.568185 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.690809 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.690918 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.690982 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vnx\" 
(UniqueName: \"kubernetes.io/projected/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-api-access-l4vnx\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.691048 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.793277 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.793384 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4vnx\" (UniqueName: \"kubernetes.io/projected/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-api-access-l4vnx\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.793457 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.793492 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.799726 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.800002 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.800915 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.822861 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4vnx\" (UniqueName: \"kubernetes.io/projected/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-api-access-l4vnx\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.895995 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.468012 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.715894 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" path="/var/lib/kubelet/pods/6c04ef21-3d68-44e8-ba69-164f3b32b7a0/volumes" Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.716723 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.717110 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-central-agent" containerID="cri-o://9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" gracePeriod=30 Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.717242 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="sg-core" containerID="cri-o://e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" gracePeriod=30 Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.717311 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="proxy-httpd" containerID="cri-o://f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" gracePeriod=30 Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.717351 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-notification-agent" containerID="cri-o://2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" 
gracePeriod=30 Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.853561 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.853648 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.112608 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.203493 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.203885 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="dnsmasq-dns" containerID="cri-o://0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb" gracePeriod=10 Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.213045 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.213112 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.247840 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.454417 5012 generic.go:334] "Generic (PLEG): container finished" podID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" containerID="021fb32c8f118be6cb115c199b5bccac76ab7b25e96dc7239f4fa322280c2c3c" exitCode=0 Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.454535 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr45z" 
event={"ID":"70ce9757-cdf1-4864-95ad-9d25fb9830a9","Type":"ContainerDied","Data":"021fb32c8f118be6cb115c199b5bccac76ab7b25e96dc7239f4fa322280c2c3c"} Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.456079 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44","Type":"ContainerStarted","Data":"17c75905f5e68679c3ca398b8340bbca140cca23afe8f2f0be29edbc0c8b934f"} Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.456129 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44","Type":"ContainerStarted","Data":"47a48bb3171a3d55ebe0ae1b610ada66d436c3aa3c7e61ea1720b03a90b1d619"} Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.456705 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465803 5012 generic.go:334] "Generic (PLEG): container finished" podID="01803024-8b09-46a8-849a-7129e5734fc5" containerID="f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" exitCode=0 Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465834 5012 generic.go:334] "Generic (PLEG): container finished" podID="01803024-8b09-46a8-849a-7129e5734fc5" containerID="e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" exitCode=2 Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465841 5012 generic.go:334] "Generic (PLEG): container finished" podID="01803024-8b09-46a8-849a-7129e5734fc5" containerID="9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" exitCode=0 Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465878 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5"} Feb 
19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465897 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b"} Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465907 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5"} Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.468594 5012 generic.go:334] "Generic (PLEG): container finished" podID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerID="0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb" exitCode=0 Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.469380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerDied","Data":"0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb"} Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.514829 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.538358 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.082493658 podStartE2EDuration="2.538334632s" podCreationTimestamp="2026-02-19 05:45:39 +0000 UTC" firstStartedPulling="2026-02-19 05:45:40.48105089 +0000 UTC m=+1236.514373459" lastFinishedPulling="2026-02-19 05:45:40.936891874 +0000 UTC m=+1236.970214433" observedRunningTime="2026-02-19 05:45:41.5020943 +0000 UTC m=+1237.535416869" watchObservedRunningTime="2026-02-19 05:45:41.538334632 +0000 UTC m=+1237.571657201" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 
05:45:41.784700 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.864482 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.864597 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.864829 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.864951 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.865042 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.865199 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.875619 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr" (OuterVolumeSpecName: "kube-api-access-ztbnr") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "kube-api-access-ztbnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.921048 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.925175 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.926185 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config" (OuterVolumeSpecName: "config") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.926847 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.931511 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.936476 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.936594 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968390 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 
05:45:41.968493 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968548 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968597 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968644 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968690 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.483016 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.483045 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerDied","Data":"5892f4877b405b9244dd43361effc1a470655536dbd633845dd04bd643dbfba5"} Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.483844 5012 scope.go:117] "RemoveContainer" containerID="0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb" Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.541813 5012 scope.go:117] "RemoveContainer" containerID="ca6a3289326a3d74df11835a9c2f296bc10d31bbffc5d5c69c448a3f93f521ea" Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.555034 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.572103 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.753628 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" path="/var/lib/kubelet/pods/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e/volumes" Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.917129 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.997175 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") pod \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.997337 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") pod \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.997676 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") pod \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.998161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") pod \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.003350 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts" (OuterVolumeSpecName: "scripts") pod "70ce9757-cdf1-4864-95ad-9d25fb9830a9" (UID: "70ce9757-cdf1-4864-95ad-9d25fb9830a9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.018180 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx" (OuterVolumeSpecName: "kube-api-access-m7wrx") pod "70ce9757-cdf1-4864-95ad-9d25fb9830a9" (UID: "70ce9757-cdf1-4864-95ad-9d25fb9830a9"). InnerVolumeSpecName "kube-api-access-m7wrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.032588 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data" (OuterVolumeSpecName: "config-data") pod "70ce9757-cdf1-4864-95ad-9d25fb9830a9" (UID: "70ce9757-cdf1-4864-95ad-9d25fb9830a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.040129 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70ce9757-cdf1-4864-95ad-9d25fb9830a9" (UID: "70ce9757-cdf1-4864-95ad-9d25fb9830a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.096333 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.096392 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.101501 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.101552 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.101563 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.101575 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.510857 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr45z" event={"ID":"70ce9757-cdf1-4864-95ad-9d25fb9830a9","Type":"ContainerDied","Data":"7bfe1197486b22ee1f88d8b65bc65fff0d81ce7b626661bba38e8562901e7bb1"} Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.511265 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bfe1197486b22ee1f88d8b65bc65fff0d81ce7b626661bba38e8562901e7bb1" Feb 19 05:45:43 crc 
kubenswrapper[5012]: I0219 05:45:43.511390 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.689341 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.689648 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log" containerID="cri-o://bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b" gracePeriod=30 Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.689741 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api" containerID="cri-o://92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42" gracePeriod=30 Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.776358 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.776650 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-log" containerID="cri-o://6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272" gracePeriod=30 Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.777239 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-metadata" containerID="cri-o://a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07" gracePeriod=30 Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.789168 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.789738 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" containerID="cri-o://afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" gracePeriod=30 Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.431728 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.431794 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.538549 5012 generic.go:334] "Generic (PLEG): container finished" podID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerID="a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07" exitCode=0 Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.538582 5012 generic.go:334] "Generic (PLEG): container finished" podID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerID="6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272" exitCode=143 Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.538637 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerDied","Data":"a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07"} Feb 19 05:45:44 crc kubenswrapper[5012]: 
I0219 05:45:44.538666 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerDied","Data":"6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272"} Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.544294 5012 generic.go:334] "Generic (PLEG): container finished" podID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerID="bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b" exitCode=143 Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.544353 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerDied","Data":"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"} Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.763575 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.837880 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.837934 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.838017 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: 
\"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.838141 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.838177 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.839693 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs" (OuterVolumeSpecName: "logs") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.846515 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd" (OuterVolumeSpecName: "kube-api-access-2lzqd") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "kube-api-access-2lzqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.879519 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.891511 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data" (OuterVolumeSpecName: "config-data") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.902847 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940080 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940116 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940125 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940136 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940146 5012 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.563348 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerDied","Data":"1f7a7e5be52d1531162f80520ab7a9bc9939afb0e0d82ef9812331d77771bcd1"} Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.563869 5012 scope.go:117] "RemoveContainer" containerID="a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.563510 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.569125 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" containerID="0b98797f9f7e97071d4699ed1c59c23ddac69aff6fb8708f48bdc42a56a8cf34" exitCode=0 Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.569155 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" event={"ID":"7fc8fbb1-0e37-419f-86e0-6ce8db99225d","Type":"ContainerDied","Data":"0b98797f9f7e97071d4699ed1c59c23ddac69aff6fb8708f48bdc42a56a8cf34"} Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.601043 5012 scope.go:117] "RemoveContainer" containerID="6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.623941 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.642683 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652090 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652523 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-log" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652544 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-log" Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652567 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" containerName="nova-manage" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652576 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" containerName="nova-manage" Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652594 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="dnsmasq-dns" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652600 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="dnsmasq-dns" Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652617 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-metadata" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652623 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-metadata" Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652646 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="init" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652652 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="init" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652845 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" containerName="nova-manage" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652860 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-metadata" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652870 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="dnsmasq-dns" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652886 5012 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-log" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.653908 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.656952 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.657192 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.660504 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.866191 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.867064 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.867450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.867566 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rh7\" (UniqueName: \"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.867637 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.968903 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969011 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969042 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969087 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9rh7\" (UniqueName: 
\"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969112 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.975281 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.976043 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.976987 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc 
kubenswrapper[5012]: I0219 05:45:45.989685 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9rh7\" (UniqueName: \"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.213844 5012 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.218880 5012 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.221784 5012 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.221862 5012 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.279864 5012 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.392838 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.591677 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.593510 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.593615 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.594894 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.597061 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.597712 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.597760 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.597982 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.598867 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.602746 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.603426 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6" (OuterVolumeSpecName: "kube-api-access-v2zr6") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "kube-api-access-v2zr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.603529 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts" (OuterVolumeSpecName: "scripts") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.604675 5012 generic.go:334] "Generic (PLEG): container finished" podID="01803024-8b09-46a8-849a-7129e5734fc5" containerID="2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" exitCode=0 Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.604852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832"} Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.604907 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"1a4c3e21ec02a97624b92d231eadc367c369bbe32cd7bae830f477cfab60fbad"} Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.604943 5012 scope.go:117] 
"RemoveContainer" containerID="f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.605167 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.640746 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.676121 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.700707 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.700811 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.700876 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.700949 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.701012 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.714346 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" path="/var/lib/kubelet/pods/f259e859-c226-472e-85d3-8b5a9c7ba66a/volumes" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.732562 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data" (OuterVolumeSpecName: "config-data") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.804109 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.807719 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.831631 5012 scope.go:117] "RemoveContainer" containerID="e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" Feb 19 05:45:46 crc kubenswrapper[5012]: W0219 05:45:46.841169 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dbca55d_fe7e_4a74_a25c_8c495eb29e3b.slice/crio-d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628 WatchSource:0}: Error finding container d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628: Status 404 returned error can't find the container with id d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628 Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.883408 5012 scope.go:117] "RemoveContainer" containerID="2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.940720 5012 scope.go:117] "RemoveContainer" containerID="9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.958839 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.972363 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.989573 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.999084 5012 scope.go:117] "RemoveContainer" containerID="f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.999900 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5\": container with ID starting with f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5 not found: ID does not exist" containerID="f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.000002 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5"} err="failed to get container status \"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5\": rpc error: code = NotFound desc = could not find container \"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5\": container with ID starting with f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5 not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.000099 5012 scope.go:117] "RemoveContainer" containerID="e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.000620 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b\": container with ID starting with e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b not found: ID does not exist" containerID="e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.000664 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b"} err="failed to get container status \"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b\": rpc error: code = NotFound desc = could not find container \"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b\": container with ID starting with e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.000694 5012 scope.go:117] "RemoveContainer" containerID="2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.001002 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832\": container with ID starting with 2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832 not found: ID does not exist" containerID="2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.001044 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832"} err="failed to get container status \"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832\": rpc error: code = NotFound desc = could not find container \"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832\": container with ID 
starting with 2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832 not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.001074 5012 scope.go:117] "RemoveContainer" containerID="9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.002984 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5\": container with ID starting with 9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5 not found: ID does not exist" containerID="9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.003009 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5"} err="failed to get container status \"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5\": rpc error: code = NotFound desc = could not find container \"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5\": container with ID starting with 9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5 not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.006559 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") pod \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.006595 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") pod \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\" 
(UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.006660 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") pod \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.006687 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") pod \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.008850 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009319 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-notification-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009337 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-notification-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009352 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" containerName="nova-cell1-conductor-db-sync" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009360 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" containerName="nova-cell1-conductor-db-sync" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009386 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="sg-core" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 
05:45:47.009394 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="sg-core" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009410 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-central-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009416 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-central-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009430 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="proxy-httpd" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009436 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="proxy-httpd" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009613 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" containerName="nova-cell1-conductor-db-sync" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009628 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="sg-core" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009643 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-central-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009654 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="proxy-httpd" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009661 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-notification-agent" Feb 19 05:45:47 crc 
kubenswrapper[5012]: I0219 05:45:47.011588 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.017118 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.017967 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.018652 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.020550 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.027773 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk" (OuterVolumeSpecName: "kube-api-access-szzzk") pod "7fc8fbb1-0e37-419f-86e0-6ce8db99225d" (UID: "7fc8fbb1-0e37-419f-86e0-6ce8db99225d"). InnerVolumeSpecName "kube-api-access-szzzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.034249 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts" (OuterVolumeSpecName: "scripts") pod "7fc8fbb1-0e37-419f-86e0-6ce8db99225d" (UID: "7fc8fbb1-0e37-419f-86e0-6ce8db99225d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.054463 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data" (OuterVolumeSpecName: "config-data") pod "7fc8fbb1-0e37-419f-86e0-6ce8db99225d" (UID: "7fc8fbb1-0e37-419f-86e0-6ce8db99225d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.071897 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fc8fbb1-0e37-419f-86e0-6ce8db99225d" (UID: "7fc8fbb1-0e37-419f-86e0-6ce8db99225d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109416 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109502 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109536 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109630 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109755 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109816 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109898 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109916 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 
05:45:47.110177 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.110201 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.110229 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.110238 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.158697 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.211400 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") pod \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.211826 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") pod \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.211932 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") pod \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.212247 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") pod \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.212980 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs" (OuterVolumeSpecName: "logs") pod "b9c5be03-d36f-4a6a-8359-535ed4ad505d" (UID: "b9c5be03-d36f-4a6a-8359-535ed4ad505d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.213780 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.213876 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.213906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.213944 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214065 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214109 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214173 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214265 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.215700 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.215783 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8" (OuterVolumeSpecName: "kube-api-access-rzdm8") pod "b9c5be03-d36f-4a6a-8359-535ed4ad505d" (UID: "b9c5be03-d36f-4a6a-8359-535ed4ad505d"). InnerVolumeSpecName "kube-api-access-rzdm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.216442 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.218130 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.218842 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.219447 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.247477 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.247981 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.250518 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.275640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9c5be03-d36f-4a6a-8359-535ed4ad505d" (UID: "b9c5be03-d36f-4a6a-8359-535ed4ad505d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.283813 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data" (OuterVolumeSpecName: "config-data") pod "b9c5be03-d36f-4a6a-8359-535ed4ad505d" (UID: "b9c5be03-d36f-4a6a-8359-535ed4ad505d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.315111 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.315143 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.315152 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.334037 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.658992 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" event={"ID":"7fc8fbb1-0e37-419f-86e0-6ce8db99225d","Type":"ContainerDied","Data":"d98f7ee44c86e6ff53f7f25188347cf80c40d13a66e4ecf4956ac175a094de8b"} Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.659059 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d98f7ee44c86e6ff53f7f25188347cf80c40d13a66e4ecf4956ac175a094de8b" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.659073 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663665 5012 generic.go:334] "Generic (PLEG): container finished" podID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerID="92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42" exitCode=0 Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663729 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663758 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerDied","Data":"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"} Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerDied","Data":"ecb97b91d6f4ced51237b91313e01c9556310ff6ba6362c7e6f2808ce0a033d1"} Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663827 5012 scope.go:117] "RemoveContainer" containerID="92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.692615 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.694022 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.694048 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.694091 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" 
containerName="nova-api-api" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.694099 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.694724 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.694794 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.696248 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.705531 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerStarted","Data":"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3"} Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.705707 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerStarted","Data":"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b"} Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.705835 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerStarted","Data":"d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628"} Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.711439 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.713390 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.716405 5012 scope.go:117] "RemoveContainer" containerID="bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.761331 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.761311305 podStartE2EDuration="2.761311305s" podCreationTimestamp="2026-02-19 05:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:47.746737041 +0000 UTC m=+1243.780059610" watchObservedRunningTime="2026-02-19 05:45:47.761311305 +0000 UTC m=+1243.794633874" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.779169 5012 scope.go:117] "RemoveContainer" containerID="92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.783778 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42\": container with ID starting with 92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42 not found: ID does not exist" containerID="92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.783828 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"} err="failed to get container status \"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42\": rpc error: code = NotFound desc = could not find container \"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42\": container with ID starting with 92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42 
not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.783852 5012 scope.go:117] "RemoveContainer" containerID="bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.784280 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b\": container with ID starting with bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b not found: ID does not exist" containerID="bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.784340 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"} err="failed to get container status \"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b\": rpc error: code = NotFound desc = could not find container \"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b\": container with ID starting with bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.796611 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.827470 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.838554 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc 
kubenswrapper[5012]: I0219 05:45:47.838778 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.838820 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkj22\" (UniqueName: \"kubernetes.io/projected/aceef718-9d1c-441d-bf1b-92c0a6831def-kube-api-access-gkj22\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.843797 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.846247 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.850742 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.860420 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.877347 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.941848 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.942343 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.942539 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkj22\" (UniqueName: \"kubernetes.io/projected/aceef718-9d1c-441d-bf1b-92c0a6831def-kube-api-access-gkj22\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.949422 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " 
pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.951961 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.964014 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkj22\" (UniqueName: \"kubernetes.io/projected/aceef718-9d1c-441d-bf1b-92c0a6831def-kube-api-access-gkj22\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.041964 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.044600 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.044684 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.044729 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") pod \"nova-api-0\" (UID: 
\"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.044766 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.152985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.153122 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.153206 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.153287 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.154184 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.162409 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.163661 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.190822 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.476034 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.604630 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.718436 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01803024-8b09-46a8-849a-7129e5734fc5" path="/var/lib/kubelet/pods/01803024-8b09-46a8-849a-7129e5734fc5/volumes" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.719571 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" path="/var/lib/kubelet/pods/b9c5be03-d36f-4a6a-8359-535ed4ad505d/volumes" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.724293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817"} Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.724386 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"f47fd647e08594f727d5b6f46aed06b94634943796a315baee684b47c07fa5fe"} Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.725544 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"aceef718-9d1c-441d-bf1b-92c0a6831def","Type":"ContainerStarted","Data":"16414366c7851ccc17ca0f98a4981455028596ffb91ab593396dd0823211ab8f"} Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.727855 5012 generic.go:334] "Generic (PLEG): container finished" podID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" exitCode=0 Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.727963 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"fb843c15-c78d-4b5e-91b3-31ec0befd9fe","Type":"ContainerDied","Data":"afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800"} Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.727983 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb843c15-c78d-4b5e-91b3-31ec0befd9fe","Type":"ContainerDied","Data":"ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c"} Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.728099 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.765340 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.868127 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") pod \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.868346 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") pod \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.868464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") pod \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 
05:45:48.878989 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6" (OuterVolumeSpecName: "kube-api-access-fj8q6") pod "fb843c15-c78d-4b5e-91b3-31ec0befd9fe" (UID: "fb843c15-c78d-4b5e-91b3-31ec0befd9fe"). InnerVolumeSpecName "kube-api-access-fj8q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.898942 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data" (OuterVolumeSpecName: "config-data") pod "fb843c15-c78d-4b5e-91b3-31ec0befd9fe" (UID: "fb843c15-c78d-4b5e-91b3-31ec0befd9fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.899632 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb843c15-c78d-4b5e-91b3-31ec0befd9fe" (UID: "fb843c15-c78d-4b5e-91b3-31ec0befd9fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.967034 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.970433 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.970464 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.970476 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.744781 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.745081 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.748864 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"aceef718-9d1c-441d-bf1b-92c0a6831def","Type":"ContainerStarted","Data":"df6288f11c34ee3d8f152b2cd2ea6131e77ec5aaa909cdee0c8b10ce416c10dc"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.749520 5012 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.752387 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerStarted","Data":"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.752454 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerStarted","Data":"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.752475 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerStarted","Data":"cf097bcda3c824344ff950b961502394eb3df8a35294e9a44c6a1c8d3d85a714"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.752406 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.785362 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.785341605 podStartE2EDuration="2.785341605s" podCreationTimestamp="2026-02-19 05:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:49.776410218 +0000 UTC m=+1245.809732817" watchObservedRunningTime="2026-02-19 05:45:49.785341605 +0000 UTC m=+1245.818664184" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.810096 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.809293908 podStartE2EDuration="2.809293908s" podCreationTimestamp="2026-02-19 05:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:49.803359474 +0000 UTC m=+1245.836682043" watchObservedRunningTime="2026-02-19 05:45:49.809293908 +0000 UTC m=+1245.842616487" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.835446 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.849342 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.874285 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:49 crc kubenswrapper[5012]: E0219 05:45:49.874791 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.874807 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.875006 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.875728 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.884688 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.933254 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.933326 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.004850 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.004909 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.004948 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.107564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.107655 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.107725 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.115511 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.119888 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" 
Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.125153 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0"
Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.214891 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.661908 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 05:45:50 crc kubenswrapper[5012]: W0219 05:45:50.665773 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96352ff3_accb_4fd1_8fa4_eec10f340eaf.slice/crio-0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef WatchSource:0}: Error finding container 0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef: Status 404 returned error can't find the container with id 0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef
Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.714587 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" path="/var/lib/kubelet/pods/fb843c15-c78d-4b5e-91b3-31ec0befd9fe/volumes"
Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.766000 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96352ff3-accb-4fd1-8fa4-eec10f340eaf","Type":"ContainerStarted","Data":"0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef"}
Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.280581 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.280995 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.781796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f"}
Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.783152 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.785174 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96352ff3-accb-4fd1-8fa4-eec10f340eaf","Type":"ContainerStarted","Data":"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c"}
Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.807859 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.823430095 podStartE2EDuration="5.807818758s" podCreationTimestamp="2026-02-19 05:45:46 +0000 UTC" firstStartedPulling="2026-02-19 05:45:47.813806273 +0000 UTC m=+1243.847128842" lastFinishedPulling="2026-02-19 05:45:50.798194926 +0000 UTC m=+1246.831517505" observedRunningTime="2026-02-19 05:45:51.806842915 +0000 UTC m=+1247.840165514" watchObservedRunningTime="2026-02-19 05:45:51.807818758 +0000 UTC m=+1247.841141367"
Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.836127 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8361057670000003 podStartE2EDuration="2.836105767s" podCreationTimestamp="2026-02-19 05:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:51.827579399 +0000 UTC m=+1247.860901988" watchObservedRunningTime="2026-02-19 05:45:51.836105767 +0000 UTC m=+1247.869428336"
Feb 19 05:45:53 crc kubenswrapper[5012]: I0219 05:45:53.079284 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:55 crc kubenswrapper[5012]: I0219 05:45:55.215762 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 19 05:45:56 crc kubenswrapper[5012]: I0219 05:45:56.280535 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 05:45:56 crc kubenswrapper[5012]: I0219 05:45:56.280634 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 19 05:45:57 crc kubenswrapper[5012]: I0219 05:45:57.295545 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 05:45:57 crc kubenswrapper[5012]: I0219 05:45:57.295634 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 19 05:45:58 crc kubenswrapper[5012]: I0219 05:45:58.477064 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 05:45:58 crc kubenswrapper[5012]: I0219 05:45:58.477178 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 05:45:59 crc kubenswrapper[5012]: I0219 05:45:59.559501 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 05:45:59 crc kubenswrapper[5012]: I0219 05:45:59.560604 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 05:46:00 crc kubenswrapper[5012]: I0219 05:46:00.215570 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 19 05:46:00 crc kubenswrapper[5012]: I0219 05:46:00.270930 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 19 05:46:00 crc kubenswrapper[5012]: I0219 05:46:00.972176 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 19 05:46:05 crc kubenswrapper[5012]: I0219 05:46:05.926353 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007299 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerID="602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc" exitCode=137
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007425 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae","Type":"ContainerDied","Data":"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"}
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007429 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007480 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae","Type":"ContainerDied","Data":"f59fd4a4ac42380c62e5cbec861422215a50132042a18819d7c99682128821ac"}
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007517 5012 scope.go:117] "RemoveContainer" containerID="602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.030067 5012 scope.go:117] "RemoveContainer" containerID="602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"
Feb 19 05:46:06 crc kubenswrapper[5012]: E0219 05:46:06.031423 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc\": container with ID starting with 602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc not found: ID does not exist" containerID="602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.031476 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"} err="failed to get container status \"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc\": rpc error: code = NotFound desc = could not find container \"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc\": container with ID starting with 602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc not found: ID does not exist"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.090451 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") pod \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") "
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.090613 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") pod \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") "
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.090656 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") pod \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") "
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.098619 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh" (OuterVolumeSpecName: "kube-api-access-kzcfh") pod "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" (UID: "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae"). InnerVolumeSpecName "kube-api-access-kzcfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.128715 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" (UID: "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.154363 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data" (OuterVolumeSpecName: "config-data") pod "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" (UID: "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.194192 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") on node \"crc\" DevicePath \"\""
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.194225 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.194238 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.287192 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.287841 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.300676 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.380702 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.398426 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.413465 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 05:46:06 crc kubenswrapper[5012]: E0219 05:46:06.414191 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerName="nova-cell1-novncproxy-novncproxy"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.414221 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerName="nova-cell1-novncproxy-novncproxy"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.414609 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerName="nova-cell1-novncproxy-novncproxy"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.415787 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.419085 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.419532 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.419759 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.424099 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502191 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502440 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502503 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502606 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvq4\" (UniqueName: \"kubernetes.io/projected/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-kube-api-access-vrvq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502692 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.605927 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.606080 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrvq4\" (UniqueName: \"kubernetes.io/projected/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-kube-api-access-vrvq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.606167 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.606378 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.606469 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.613761 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.613900 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.616270 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.617613 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.638708 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrvq4\" (UniqueName: \"kubernetes.io/projected/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-kube-api-access-vrvq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.723917 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" path="/var/lib/kubelet/pods/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae/volumes"
Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.739398 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:07 crc kubenswrapper[5012]: I0219 05:46:07.043772 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 05:46:07 crc kubenswrapper[5012]: I0219 05:46:07.058276 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.047332 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21","Type":"ContainerStarted","Data":"61755fb4f9526f0f09c3360af6043a794ad06789527403b495768dabab0f4b32"}
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.047931 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21","Type":"ContainerStarted","Data":"b11a2b0cf303cbc48d43192ca58dcd19f09004d496e641cb741e046ba5838102"}
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.081497 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.081473263 podStartE2EDuration="2.081473263s" podCreationTimestamp="2026-02-19 05:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:08.069163753 +0000 UTC m=+1264.102486352" watchObservedRunningTime="2026-02-19 05:46:08.081473263 +0000 UTC m=+1264.114795842"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.487449 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.487860 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.488356 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.488437 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.497266 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.499799 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.740702 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"]
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.743186 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.772031 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"]
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.880883 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.880952 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.881100 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.881134 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.881156 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.881219 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.983663 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.983751 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.983807 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.983914 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.984250 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.984333 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985489 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985076 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985171 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985391 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:09 crc kubenswrapper[5012]: I0219 05:46:09.012156 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:09 crc kubenswrapper[5012]: I0219 05:46:09.080285 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:09 crc kubenswrapper[5012]: I0219 05:46:09.661753 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"]
Feb 19 05:46:10 crc kubenswrapper[5012]: I0219 05:46:10.066611 5012 generic.go:334] "Generic (PLEG): container finished" podID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerID="26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488" exitCode=0
Feb 19 05:46:10 crc kubenswrapper[5012]: I0219 05:46:10.066666 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerDied","Data":"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488"}
Feb 19 05:46:10 crc kubenswrapper[5012]: I0219 05:46:10.066932 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerStarted","Data":"76c330a33b78602a2e427fa0cfc346da48f97fdaa4760b156caa3f21371da964"}
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.029840 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.030582 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-central-agent" containerID="cri-o://b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" gracePeriod=30
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.031554 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" containerID="cri-o://fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" gracePeriod=30
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.031616 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-notification-agent" containerID="cri-o://7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" gracePeriod=30
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.031651 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="sg-core" containerID="cri-o://84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" gracePeriod=30
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.040960 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.215:3000/\": EOF"
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.080773 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerStarted","Data":"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498"}
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.081048 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85446bf977-vzlgl"
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.114456 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" podStartSLOduration=3.114423969 podStartE2EDuration="3.114423969s" podCreationTimestamp="2026-02-19 05:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:11.099759912 +0000 UTC m=+1267.133082481" watchObservedRunningTime="2026-02-19 05:46:11.114423969 +0000 UTC m=+1267.147746578"
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.549935 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.550382 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" containerID="cri-o://e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" gracePeriod=30
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.550527 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" containerID="cri-o://7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" gracePeriod=30
Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.740926 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102581 5012 generic.go:334] "Generic (PLEG): container finished" podID="27805340-8269-4d8f-9183-b1cb339fea39" containerID="fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" exitCode=0
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102610 5012 generic.go:334] "Generic (PLEG): container finished" podID="27805340-8269-4d8f-9183-b1cb339fea39" containerID="84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" exitCode=2
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102622 5012 generic.go:334] "Generic (PLEG): container finished" podID="27805340-8269-4d8f-9183-b1cb339fea39" containerID="b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" exitCode=0
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102686 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f"}
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102735 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846"}
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102751 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817"}
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.105338 5012 generic.go:334] "Generic (PLEG): container finished" podID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerID="e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" exitCode=143
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.105399 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerDied","Data":"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721"}
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.899545 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.963552 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") pod \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") "
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.963672 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") pod \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") "
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.963707 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") pod \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") "
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.963863 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") pod \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") "
Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.964381 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs" (OuterVolumeSpecName: "logs") pod "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" (UID: "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.970293 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp" (OuterVolumeSpecName: "kube-api-access-z9cwp") pod "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" (UID: "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06"). InnerVolumeSpecName "kube-api-access-z9cwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.001059 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data" (OuterVolumeSpecName: "config-data") pod "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" (UID: "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.016682 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" (UID: "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.066075 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.066247 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.066360 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.066463 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.116537 5012 generic.go:334] "Generic (PLEG): container finished" podID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerID="7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" exitCode=0 Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.116577 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerDied","Data":"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a"} Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.116601 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerDied","Data":"cf097bcda3c824344ff950b961502394eb3df8a35294e9a44c6a1c8d3d85a714"} Feb 19 05:46:13 crc kubenswrapper[5012]: 
I0219 05:46:13.116602 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.116677 5012 scope.go:117] "RemoveContainer" containerID="7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.144521 5012 scope.go:117] "RemoveContainer" containerID="e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.187947 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.194069 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.220219 5012 scope.go:117] "RemoveContainer" containerID="7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.220416 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:13 crc kubenswrapper[5012]: E0219 05:46:13.220889 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.220903 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" Feb 19 05:46:13 crc kubenswrapper[5012]: E0219 05:46:13.220915 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.220922 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.221188 5012 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.221242 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.222847 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228327 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 05:46:13 crc kubenswrapper[5012]: E0219 05:46:13.228474 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a\": container with ID starting with 7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a not found: ID does not exist" containerID="7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228513 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a"} err="failed to get container status \"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a\": rpc error: code = NotFound desc = could not find container \"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a\": container with ID starting with 7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a not found: ID does not exist" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228540 5012 scope.go:117] "RemoveContainer" containerID="e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228702 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-nova-public-svc" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228996 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 05:46:13 crc kubenswrapper[5012]: E0219 05:46:13.242560 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721\": container with ID starting with e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721 not found: ID does not exist" containerID="e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.242637 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721"} err="failed to get container status \"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721\": rpc error: code = NotFound desc = could not find container \"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721\": container with ID starting with e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721 not found: ID does not exist" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.255454 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391072 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391124 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391177 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391582 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391632 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391940 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.494434 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " 
pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.494908 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.495109 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.495142 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.495134 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.495502 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.496086 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.501077 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.501783 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.502076 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.502904 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.516258 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.556574 5012 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.104710 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.130062 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerStarted","Data":"04fe4ed95fee83c7e2c8336e973329811054a21e11bbed885171027e8406c6c8"} Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.431128 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.431617 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.723616 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" path="/var/lib/kubelet/pods/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06/volumes" Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.965276 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.128052 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.128182 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.128504 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129133 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129250 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129405 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129551 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129563 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129638 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129686 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.130486 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.130508 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.145939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts" (OuterVolumeSpecName: "scripts") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.147286 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb" (OuterVolumeSpecName: "kube-api-access-vpktb") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "kube-api-access-vpktb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158559 5012 generic.go:334] "Generic (PLEG): container finished" podID="27805340-8269-4d8f-9183-b1cb339fea39" containerID="7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" exitCode=0 Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158624 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f"} Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158653 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"f47fd647e08594f727d5b6f46aed06b94634943796a315baee684b47c07fa5fe"} Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158669 5012 scope.go:117] "RemoveContainer" containerID="fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158776 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.167516 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.173645 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerStarted","Data":"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21"} Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.173698 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerStarted","Data":"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705"} Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.205249 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.20523127 podStartE2EDuration="2.20523127s" podCreationTimestamp="2026-02-19 05:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:15.203875857 +0000 UTC m=+1271.237198426" watchObservedRunningTime="2026-02-19 05:46:15.20523127 +0000 UTC m=+1271.238553839" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.208585 5012 scope.go:117] "RemoveContainer" containerID="84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.225104 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.232753 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.232783 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.232794 5012 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.232806 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.251096 5012 scope.go:117] "RemoveContainer" containerID="7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.270819 5012 scope.go:117] "RemoveContainer" containerID="b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.273657 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.294614 5012 scope.go:117] "RemoveContainer" containerID="fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.295083 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f\": container with ID starting with fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f not found: ID does not exist" containerID="fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.295132 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f"} err="failed to get container status \"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f\": rpc error: code = NotFound desc = could not find container \"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f\": container with ID starting with fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f not found: ID does not exist" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.295161 5012 scope.go:117] "RemoveContainer" containerID="84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.296825 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846\": container with ID starting with 84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846 not found: ID does 
not exist" containerID="84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.296871 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846"} err="failed to get container status \"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846\": rpc error: code = NotFound desc = could not find container \"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846\": container with ID starting with 84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846 not found: ID does not exist" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.296899 5012 scope.go:117] "RemoveContainer" containerID="7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.297235 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f\": container with ID starting with 7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f not found: ID does not exist" containerID="7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.297281 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f"} err="failed to get container status \"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f\": rpc error: code = NotFound desc = could not find container \"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f\": container with ID starting with 7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f not found: ID does not exist" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.297297 5012 
scope.go:117] "RemoveContainer" containerID="b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.297666 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817\": container with ID starting with b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817 not found: ID does not exist" containerID="b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.297695 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817"} err="failed to get container status \"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817\": rpc error: code = NotFound desc = could not find container \"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817\": container with ID starting with b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817 not found: ID does not exist" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.299584 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data" (OuterVolumeSpecName: "config-data") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.335736 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.335790 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.567847 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.578462 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596208 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.596657 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596676 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.596702 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-notification-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596710 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-notification-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.596730 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="sg-core" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596739 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="sg-core" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.596752 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-central-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596761 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-central-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596975 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596994 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-notification-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.597014 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="sg-core" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.597037 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-central-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.599424 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.603771 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.606864 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.607535 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.607835 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767034 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767130 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767240 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-config-data\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767288 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r57xv\" (UniqueName: \"kubernetes.io/projected/9647feae-5291-41e1-9bb4-631f661552b9-kube-api-access-r57xv\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767417 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-run-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767465 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767515 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-log-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767554 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-scripts\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.870832 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-config-data\") pod \"ceilometer-0\" (UID: 
\"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.870923 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r57xv\" (UniqueName: \"kubernetes.io/projected/9647feae-5291-41e1-9bb4-631f661552b9-kube-api-access-r57xv\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-run-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871090 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871159 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-log-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871206 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-scripts\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871389 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871460 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871889 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-run-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.873050 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-log-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.877030 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.878125 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc 
kubenswrapper[5012]: I0219 05:46:15.878265 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-scripts\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.878808 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-config-data\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.886847 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.892671 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r57xv\" (UniqueName: \"kubernetes.io/projected/9647feae-5291-41e1-9bb4-631f661552b9-kube-api-access-r57xv\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.948965 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:46:16 crc kubenswrapper[5012]: I0219 05:46:16.453909 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:16 crc kubenswrapper[5012]: I0219 05:46:16.715999 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27805340-8269-4d8f-9183-b1cb339fea39" path="/var/lib/kubelet/pods/27805340-8269-4d8f-9183-b1cb339fea39/volumes" Feb 19 05:46:16 crc kubenswrapper[5012]: I0219 05:46:16.741246 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:16 crc kubenswrapper[5012]: I0219 05:46:16.771076 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.200804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"2af73644b5894679c07a4f93835a002f13a8829a9b46d3ef4965a8b10615c043"} Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.201217 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"6bf60473bddc0fd9a12ac8d4f58b44cb413d42ee7a6d99476728a740e9353092"} Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.221028 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.422404 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"] Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.424458 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.430974 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.431079 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.450015 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"] Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.608264 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.608435 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.608490 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.608808 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.710848 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.710926 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.710993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.711044 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.716129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.717989 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.718429 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.750978 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:18 crc kubenswrapper[5012]: I0219 05:46:18.043824 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:18 crc kubenswrapper[5012]: I0219 05:46:18.226437 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"2d58e4435a956762307e0d481120cafc3d3b0586b5c958e51b334e3a95d2d854"} Feb 19 05:46:18 crc kubenswrapper[5012]: I0219 05:46:18.226714 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"56ca39e0fad37f61034c8dc94c678a7eaebfbe61eae9b4579509b772bbd7ca90"} Feb 19 05:46:18 crc kubenswrapper[5012]: I0219 05:46:18.536986 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"] Feb 19 05:46:18 crc kubenswrapper[5012]: W0219 05:46:18.541129 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf597fc0f_7407_4f05_916c_70f7a3f145ec.slice/crio-e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3 WatchSource:0}: Error finding container e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3: Status 404 returned error can't find the container with id e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3 Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.082538 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.175400 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.175954 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="dnsmasq-dns" 
containerID="cri-o://401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" gracePeriod=10 Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.252807 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4t5r4" event={"ID":"f597fc0f-7407-4f05-916c-70f7a3f145ec","Type":"ContainerStarted","Data":"9d9ddb4f57f745aaa08f8b6e7a9a59d578aaf776b154c4fb9be135f3d48b048d"} Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.252847 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4t5r4" event={"ID":"f597fc0f-7407-4f05-916c-70f7a3f145ec","Type":"ContainerStarted","Data":"e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3"} Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.280752 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4t5r4" podStartSLOduration=2.280723738 podStartE2EDuration="2.280723738s" podCreationTimestamp="2026-02-19 05:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:19.277854688 +0000 UTC m=+1275.311177257" watchObservedRunningTime="2026-02-19 05:46:19.280723738 +0000 UTC m=+1275.314046307" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.653730 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763050 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763165 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763280 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763390 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763458 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763495 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.787535 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj" (OuterVolumeSpecName: "kube-api-access-mfxlj") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "kube-api-access-mfxlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.823252 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config" (OuterVolumeSpecName: "config") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.829921 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.834887 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.839692 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.841341 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867150 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867340 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867428 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867533 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") on node \"crc\" DevicePath 
\"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867612 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867683 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272387 5012 generic.go:334] "Generic (PLEG): container finished" podID="c1589f54-6631-4004-b2a9-e253b43b0644" containerID="401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" exitCode=0 Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272586 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerDied","Data":"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb"} Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272938 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerDied","Data":"8a23cad7dbe6ef631f80ea11b62d7b988e6b72ef836fd0ba728b4bc06cb53bf4"} Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272967 5012 scope.go:117] "RemoveContainer" containerID="401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272696 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.282792 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"0c23692d5ed1b1882f1b396df4e1f7cb0268dc37efd4bd4b5d74511691a797bc"} Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.326930 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.854640061 podStartE2EDuration="5.32690087s" podCreationTimestamp="2026-02-19 05:46:15 +0000 UTC" firstStartedPulling="2026-02-19 05:46:16.451553413 +0000 UTC m=+1272.484875982" lastFinishedPulling="2026-02-19 05:46:18.923814212 +0000 UTC m=+1274.957136791" observedRunningTime="2026-02-19 05:46:20.306851262 +0000 UTC m=+1276.340173841" watchObservedRunningTime="2026-02-19 05:46:20.32690087 +0000 UTC m=+1276.360223479" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.327406 5012 scope.go:117] "RemoveContainer" containerID="ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.360934 5012 scope.go:117] "RemoveContainer" containerID="401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.362852 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:46:20 crc kubenswrapper[5012]: E0219 05:46:20.368454 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb\": container with ID starting with 401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb not found: ID does not exist" containerID="401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" Feb 19 05:46:20 crc 
kubenswrapper[5012]: I0219 05:46:20.368556 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb"} err="failed to get container status \"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb\": rpc error: code = NotFound desc = could not find container \"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb\": container with ID starting with 401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb not found: ID does not exist" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.368641 5012 scope.go:117] "RemoveContainer" containerID="ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8" Feb 19 05:46:20 crc kubenswrapper[5012]: E0219 05:46:20.370440 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8\": container with ID starting with ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8 not found: ID does not exist" containerID="ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.370620 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8"} err="failed to get container status \"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8\": rpc error: code = NotFound desc = could not find container \"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8\": container with ID starting with ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8 not found: ID does not exist" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.372142 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:46:20 crc 
kubenswrapper[5012]: I0219 05:46:20.720452 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" path="/var/lib/kubelet/pods/c1589f54-6631-4004-b2a9-e253b43b0644/volumes" Feb 19 05:46:21 crc kubenswrapper[5012]: I0219 05:46:21.297438 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:46:23 crc kubenswrapper[5012]: I0219 05:46:23.557858 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:46:23 crc kubenswrapper[5012]: I0219 05:46:23.559743 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:46:24 crc kubenswrapper[5012]: I0219 05:46:24.335414 5012 generic.go:334] "Generic (PLEG): container finished" podID="f597fc0f-7407-4f05-916c-70f7a3f145ec" containerID="9d9ddb4f57f745aaa08f8b6e7a9a59d578aaf776b154c4fb9be135f3d48b048d" exitCode=0 Feb 19 05:46:24 crc kubenswrapper[5012]: I0219 05:46:24.336894 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4t5r4" event={"ID":"f597fc0f-7407-4f05-916c-70f7a3f145ec","Type":"ContainerDied","Data":"9d9ddb4f57f745aaa08f8b6e7a9a59d578aaf776b154c4fb9be135f3d48b048d"} Feb 19 05:46:24 crc kubenswrapper[5012]: I0219 05:46:24.563600 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:24 crc kubenswrapper[5012]: I0219 05:46:24.567552 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 19 05:46:25 crc kubenswrapper[5012]: I0219 05:46:25.837902 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.034087 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") pod \"f597fc0f-7407-4f05-916c-70f7a3f145ec\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.034214 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") pod \"f597fc0f-7407-4f05-916c-70f7a3f145ec\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.034265 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") pod \"f597fc0f-7407-4f05-916c-70f7a3f145ec\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.034497 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") pod \"f597fc0f-7407-4f05-916c-70f7a3f145ec\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.041586 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96" (OuterVolumeSpecName: "kube-api-access-64q96") pod "f597fc0f-7407-4f05-916c-70f7a3f145ec" (UID: "f597fc0f-7407-4f05-916c-70f7a3f145ec"). 
InnerVolumeSpecName "kube-api-access-64q96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.041914 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts" (OuterVolumeSpecName: "scripts") pod "f597fc0f-7407-4f05-916c-70f7a3f145ec" (UID: "f597fc0f-7407-4f05-916c-70f7a3f145ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.068491 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f597fc0f-7407-4f05-916c-70f7a3f145ec" (UID: "f597fc0f-7407-4f05-916c-70f7a3f145ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.071221 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data" (OuterVolumeSpecName: "config-data") pod "f597fc0f-7407-4f05-916c-70f7a3f145ec" (UID: "f597fc0f-7407-4f05-916c-70f7a3f145ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.136991 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.137053 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.137070 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.137082 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.359180 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4t5r4" event={"ID":"f597fc0f-7407-4f05-916c-70f7a3f145ec","Type":"ContainerDied","Data":"e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3"} Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.359401 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.359258 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.569052 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.569530 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" containerID="cri-o://898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" gracePeriod=30 Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.569609 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" containerID="cri-o://4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" gracePeriod=30 Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.578415 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.578621 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" containerName="nova-scheduler-scheduler" containerID="cri-o://2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" gracePeriod=30 Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.674188 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.674794 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" containerID="cri-o://6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" gracePeriod=30 Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.674865 5012 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" containerID="cri-o://ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" gracePeriod=30 Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.371409 5012 generic.go:334] "Generic (PLEG): container finished" podID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerID="6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" exitCode=143 Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.371468 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerDied","Data":"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b"} Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.374679 5012 generic.go:334] "Generic (PLEG): container finished" podID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerID="898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" exitCode=143 Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.374712 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerDied","Data":"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705"} Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.984027 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077679 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077734 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077782 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077830 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077881 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077961 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.084053 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5" (OuterVolumeSpecName: "kube-api-access-djdp5") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "kube-api-access-djdp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.086198 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs" (OuterVolumeSpecName: "logs") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.104460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data" (OuterVolumeSpecName: "config-data") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.126924 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.134923 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.137665 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.166356 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184234 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184259 5012 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184269 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184279 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184322 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184330 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286049 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9rh7\" (UniqueName: \"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: 
\"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286145 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286250 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286335 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286371 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.287001 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs" (OuterVolumeSpecName: "logs") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.292449 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7" (OuterVolumeSpecName: "kube-api-access-q9rh7") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "kube-api-access-q9rh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.309236 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.317272 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data" (OuterVolumeSpecName: "config-data") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.347901 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.384380 5012 generic.go:334] "Generic (PLEG): container finished" podID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerID="4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" exitCode=0 Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.385201 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerDied","Data":"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21"} Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.385398 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerDied","Data":"04fe4ed95fee83c7e2c8336e973329811054a21e11bbed885171027e8406c6c8"} Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.385465 5012 scope.go:117] "RemoveContainer" containerID="4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.385541 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.387477 5012 generic.go:334] "Generic (PLEG): container finished" podID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerID="ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" exitCode=0 Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.387521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerDied","Data":"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3"} Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.387549 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerDied","Data":"d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628"} Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.387608 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388723 5012 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388741 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388750 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388760 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388769 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9rh7\" (UniqueName: \"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.418980 5012 scope.go:117] "RemoveContainer" containerID="898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.453528 5012 scope.go:117] "RemoveContainer" containerID="4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.454469 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21\": container with ID starting with 4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21 not found: ID does not exist" containerID="4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.457340 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21"} err="failed to get container status \"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21\": rpc error: code = NotFound desc = could not find container \"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21\": container with ID starting with 4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21 not found: ID does not exist" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.457453 5012 scope.go:117] "RemoveContainer" containerID="898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.457963 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705\": container with ID starting with 898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705 not found: ID does not exist" containerID="898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.458039 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705"} err="failed to get container status \"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705\": rpc error: code = NotFound desc = could not find container \"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705\": container with ID 
starting with 898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705 not found: ID does not exist" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.458123 5012 scope.go:117] "RemoveContainer" containerID="ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.466091 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.492218 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.506600 5012 scope.go:117] "RemoveContainer" containerID="6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.513409 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.522746 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.524051 5012 scope.go:117] "RemoveContainer" containerID="ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.524997 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3\": container with ID starting with ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3 not found: ID does not exist" containerID="ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.525049 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3"} err="failed to get container status 
\"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3\": rpc error: code = NotFound desc = could not find container \"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3\": container with ID starting with ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3 not found: ID does not exist" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.525077 5012 scope.go:117] "RemoveContainer" containerID="6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.525421 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b\": container with ID starting with 6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b not found: ID does not exist" containerID="6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.525441 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b"} err="failed to get container status \"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b\": rpc error: code = NotFound desc = could not find container \"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b\": container with ID starting with 6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b not found: ID does not exist" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.531707 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532182 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="dnsmasq-dns" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532198 5012 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="dnsmasq-dns" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532209 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="init" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532215 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="init" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532228 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532234 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532244 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532250 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532261 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532267 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532280 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f597fc0f-7407-4f05-916c-70f7a3f145ec" containerName="nova-manage" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532285 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f597fc0f-7407-4f05-916c-70f7a3f145ec" containerName="nova-manage" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532331 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532337 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532507 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="dnsmasq-dns" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532520 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532533 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532546 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532560 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f597fc0f-7407-4f05-916c-70f7a3f145ec" containerName="nova-manage" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532572 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.533633 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.535841 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.535882 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.536395 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.555632 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.557287 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.560889 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.561071 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.573857 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.582894 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.601058 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc58982d_c141_4de8_bf5b_1669db2facb1.slice/crio-04fe4ed95fee83c7e2c8336e973329811054a21e11bbed885171027e8406c6c8\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc58982d_c141_4de8_bf5b_1669db2facb1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dbca55d_fe7e_4a74_a25c_8c495eb29e3b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dbca55d_fe7e_4a74_a25c_8c495eb29e3b.slice/crio-d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628\": RecentStats: unable to find data in memory cache]" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.697861 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.697951 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-config-data\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/396b18f9-9859-4b42-aca1-c29c3724c86c-logs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698108 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698150 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxbt\" (UniqueName: \"kubernetes.io/projected/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-kube-api-access-jxxbt\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698192 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698254 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrjn\" (UniqueName: \"kubernetes.io/projected/396b18f9-9859-4b42-aca1-c29c3724c86c-kube-api-access-7xrjn\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698386 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-config-data\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 
05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698417 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-logs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698459 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.713364 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" path="/var/lib/kubelet/pods/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b/volumes" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.714226 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" path="/var/lib/kubelet/pods/bc58982d-c141-4de8-bf5b-1669db2facb1/volumes" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.799730 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800073 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-config-data\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800144 
5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/396b18f9-9859-4b42-aca1-c29c3724c86c-logs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800283 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxbt\" (UniqueName: \"kubernetes.io/projected/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-kube-api-access-jxxbt\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800334 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800379 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800400 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrjn\" (UniqueName: 
\"kubernetes.io/projected/396b18f9-9859-4b42-aca1-c29c3724c86c-kube-api-access-7xrjn\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800474 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-config-data\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800496 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-logs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800526 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800902 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/396b18f9-9859-4b42-aca1-c29c3724c86c-logs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.801932 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-logs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.803677 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.806289 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.806607 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-config-data\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.807029 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.812489 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-config-data\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.813990 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " 
pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.818478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxxbt\" (UniqueName: \"kubernetes.io/projected/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-kube-api-access-jxxbt\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.818579 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.823595 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrjn\" (UniqueName: \"kubernetes.io/projected/396b18f9-9859-4b42-aca1-c29c3724c86c-kube-api-access-7xrjn\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.855991 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.876448 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.209461 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.310839 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") pod \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.310907 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") pod \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.311145 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") pod \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.316520 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n" (OuterVolumeSpecName: "kube-api-access-gj24n") pod "96352ff3-accb-4fd1-8fa4-eec10f340eaf" (UID: "96352ff3-accb-4fd1-8fa4-eec10f340eaf"). InnerVolumeSpecName "kube-api-access-gj24n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.346462 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.354645 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96352ff3-accb-4fd1-8fa4-eec10f340eaf" (UID: "96352ff3-accb-4fd1-8fa4-eec10f340eaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.364902 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data" (OuterVolumeSpecName: "config-data") pod "96352ff3-accb-4fd1-8fa4-eec10f340eaf" (UID: "96352ff3-accb-4fd1-8fa4-eec10f340eaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.402348 5012 generic.go:334] "Generic (PLEG): container finished" podID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" containerID="2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" exitCode=0 Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.402521 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.403930 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96352ff3-accb-4fd1-8fa4-eec10f340eaf","Type":"ContainerDied","Data":"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c"} Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.403982 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96352ff3-accb-4fd1-8fa4-eec10f340eaf","Type":"ContainerDied","Data":"0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef"} Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.404000 5012 scope.go:117] "RemoveContainer" containerID="2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.409983 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1a529b0-65f7-4680-a4fd-4dacebc1ab83","Type":"ContainerStarted","Data":"048c66c098b7f9e8eaec85a26f1504617a2fd828fc11f95817a7669416de03c9"} Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.413466 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.413502 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.413516 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 
05:46:29.430430 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.454456 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.461423 5012 scope.go:117] "RemoveContainer" containerID="2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.472093 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: E0219 05:46:29.473061 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c\": container with ID starting with 2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c not found: ID does not exist" containerID="2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.473102 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c"} err="failed to get container status \"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c\": rpc error: code = NotFound desc = could not find container \"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c\": container with ID starting with 2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c not found: ID does not exist" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.495494 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: E0219 05:46:29.496167 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" 
containerName="nova-scheduler-scheduler" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.496198 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" containerName="nova-scheduler-scheduler" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.496598 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" containerName="nova-scheduler-scheduler" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.497650 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.497780 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.500658 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.618204 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-config-data\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.618261 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxb2\" (UniqueName: \"kubernetes.io/projected/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-kube-api-access-mzxb2\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.618370 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.720589 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-config-data\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.720653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxb2\" (UniqueName: \"kubernetes.io/projected/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-kube-api-access-mzxb2\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.720756 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.724180 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.724225 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-config-data\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " 
pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.736169 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxb2\" (UniqueName: \"kubernetes.io/projected/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-kube-api-access-mzxb2\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.819898 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.281647 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:30 crc kubenswrapper[5012]: W0219 05:46:30.287458 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cfb0ed7_fe80_4d03_9ecb_31587c57bfd0.slice/crio-203b529037ccc9aaf68ea8bdc0675c8a399762c363d09de08c57c745a8caf5b4 WatchSource:0}: Error finding container 203b529037ccc9aaf68ea8bdc0675c8a399762c363d09de08c57c745a8caf5b4: Status 404 returned error can't find the container with id 203b529037ccc9aaf68ea8bdc0675c8a399762c363d09de08c57c745a8caf5b4 Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.422897 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0","Type":"ContainerStarted","Data":"203b529037ccc9aaf68ea8bdc0675c8a399762c363d09de08c57c745a8caf5b4"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.425077 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"396b18f9-9859-4b42-aca1-c29c3724c86c","Type":"ContainerStarted","Data":"3520479af12ca532f031d29fd0e70688ec7f4a71074814c2a5e54db9b37ba120"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.425100 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"396b18f9-9859-4b42-aca1-c29c3724c86c","Type":"ContainerStarted","Data":"c7d0760ee03878273f710fd278cc50691fe286371da17a9f060d59bf0f4c14f6"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.425111 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"396b18f9-9859-4b42-aca1-c29c3724c86c","Type":"ContainerStarted","Data":"92a0dc095e141f2fdc5820ca1127aeb4996f8e385984fc4a71f51fff6af9276b"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.427350 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1a529b0-65f7-4680-a4fd-4dacebc1ab83","Type":"ContainerStarted","Data":"85ec488a97bf35b5601f9415104f939a31d0ee0fa34c9f6050e5220eb8811b30"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.427371 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1a529b0-65f7-4680-a4fd-4dacebc1ab83","Type":"ContainerStarted","Data":"7184e48a5fb705cafe26540d60e9f3e9e43e603bf3267a9e7be7910ee208ba84"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.458807 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.458791138 podStartE2EDuration="2.458791138s" podCreationTimestamp="2026-02-19 05:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:30.452374422 +0000 UTC m=+1286.485697011" watchObservedRunningTime="2026-02-19 05:46:30.458791138 +0000 UTC m=+1286.492113707" Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.488674 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.488654635 podStartE2EDuration="2.488654635s" podCreationTimestamp="2026-02-19 05:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:30.480928917 +0000 UTC m=+1286.514251486" watchObservedRunningTime="2026-02-19 05:46:30.488654635 +0000 UTC m=+1286.521977224" Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.715399 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" path="/var/lib/kubelet/pods/96352ff3-accb-4fd1-8fa4-eec10f340eaf/volumes" Feb 19 05:46:31 crc kubenswrapper[5012]: I0219 05:46:31.467121 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0","Type":"ContainerStarted","Data":"e600cee030fb526a93db53704118d775b59807400b433d63578aec2beb4b9ff5"} Feb 19 05:46:31 crc kubenswrapper[5012]: I0219 05:46:31.496068 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.496039102 podStartE2EDuration="2.496039102s" podCreationTimestamp="2026-02-19 05:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:31.493960972 +0000 UTC m=+1287.527283611" watchObservedRunningTime="2026-02-19 05:46:31.496039102 +0000 UTC m=+1287.529361711" Feb 19 05:46:33 crc kubenswrapper[5012]: I0219 05:46:33.877168 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 05:46:33 crc kubenswrapper[5012]: I0219 05:46:33.878478 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 05:46:34 crc kubenswrapper[5012]: I0219 05:46:34.820262 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 05:46:38 crc kubenswrapper[5012]: I0219 05:46:38.856242 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 
05:46:38 crc kubenswrapper[5012]: I0219 05:46:38.858782 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:46:38 crc kubenswrapper[5012]: I0219 05:46:38.877555 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 05:46:38 crc kubenswrapper[5012]: I0219 05:46:38.877612 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.821225 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.869446 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.875262 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1a529b0-65f7-4680-a4fd-4dacebc1ab83" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.875843 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1a529b0-65f7-4680-a4fd-4dacebc1ab83" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.897512 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="396b18f9-9859-4b42-aca1-c29c3724c86c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 
05:46:39.897909 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="396b18f9-9859-4b42-aca1-c29c3724c86c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:40 crc kubenswrapper[5012]: I0219 05:46:40.651284 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.430893 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.431003 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.431071 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.432152 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.432257 5012 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac" gracePeriod=600 Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.661599 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac" exitCode=0 Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.661712 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac"} Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.662112 5012 scope.go:117] "RemoveContainer" containerID="0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915" Feb 19 05:46:45 crc kubenswrapper[5012]: I0219 05:46:45.678998 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"} Feb 19 05:46:45 crc kubenswrapper[5012]: I0219 05:46:45.964675 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 05:46:48 crc kubenswrapper[5012]: I0219 05:46:48.879170 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 05:46:48 crc kubenswrapper[5012]: I0219 05:46:48.879971 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.019898 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.024475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.027525 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.030703 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.032905 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.736401 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.741611 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.750861 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 05:46:57 crc kubenswrapper[5012]: I0219 05:46:57.362111 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:46:58 crc kubenswrapper[5012]: I0219 05:46:58.345319 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:00 crc kubenswrapper[5012]: I0219 05:47:00.793225 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq" containerID="cri-o://dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32" gracePeriod=604797 Feb 19 05:47:00 crc kubenswrapper[5012]: I0219 05:47:00.988759 5012 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 19 05:47:01 crc kubenswrapper[5012]: I0219 05:47:01.881964 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="rabbitmq" containerID="cri-o://0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" gracePeriod=604797 Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.506451 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.559260 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.559784 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.559830 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.559969 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560109 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560137 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560619 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560697 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560727 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 
05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560765 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560794 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560839 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.562293 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.562867 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.574655 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.574697 5012 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.574715 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.601055 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.612237 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info" (OuterVolumeSpecName: "pod-info") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.617610 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.628614 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq" (OuterVolumeSpecName: "kube-api-access-b8phq") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "kube-api-access-b8phq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.630997 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data" (OuterVolumeSpecName: "config-data") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.633971 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.655588 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf" (OuterVolumeSpecName: "server-conf") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677124 5012 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677158 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677169 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677179 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677189 5012 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677197 5012 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677221 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.711189 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.732046 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.779892 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.779924 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898252 5012 generic.go:334] "Generic (PLEG): container finished" podID="b0095712-262e-4562-afac-0f2f4372224d" containerID="dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32" exitCode=0 Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerDied","Data":"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"} Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898352 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerDied","Data":"a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01"} Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898371 5012 scope.go:117] "RemoveContainer" containerID="dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898526 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.935564 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.940677 5012 scope.go:117] "RemoveContainer" containerID="1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.960391 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.969046 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:47:02 crc kubenswrapper[5012]: E0219 05:47:02.969656 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="setup-container" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.969677 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="setup-container" Feb 19 05:47:02 crc kubenswrapper[5012]: E0219 05:47:02.969718 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.969727 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.970017 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.971453 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.978680 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.980366 5012 scope.go:117] "RemoveContainer" containerID="dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32" Feb 19 05:47:02 crc kubenswrapper[5012]: E0219 05:47:02.981215 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32\": container with ID starting with dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32 not found: ID does not exist" containerID="dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.981245 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"} err="failed to get container status \"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32\": rpc error: code = NotFound desc = could not find container \"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32\": container with ID starting with dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32 not found: ID 
does not exist" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.981271 5012 scope.go:117] "RemoveContainer" containerID="1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f" Feb 19 05:47:02 crc kubenswrapper[5012]: E0219 05:47:02.983581 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f\": container with ID starting with 1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f not found: ID does not exist" containerID="1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.983639 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"} err="failed to get container status \"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f\": rpc error: code = NotFound desc = could not find container \"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f\": container with ID starting with 1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f not found: ID does not exist" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988152 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988430 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988553 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988613 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-s7g27" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988570 5012 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988562 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.995478 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.086894 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087446 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087478 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087521 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3230f97-dbe4-42a2-b009-a8370c601e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 
05:47:03.087544 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087585 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087603 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7tl\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-kube-api-access-md7tl\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087623 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087706 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3230f97-dbe4-42a2-b009-a8370c601e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087739 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189815 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189936 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189974 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/c3230f97-dbe4-42a2-b009-a8370c601e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190030 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190052 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md7tl\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-kube-api-access-md7tl\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190073 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190102 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190161 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3230f97-dbe4-42a2-b009-a8370c601e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190192 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190681 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190996 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.192892 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.194013 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.194678 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.194699 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.211283 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.215075 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.218964 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3230f97-dbe4-42a2-b009-a8370c601e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.239162 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md7tl\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-kube-api-access-md7tl\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.239816 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3230f97-dbe4-42a2-b009-a8370c601e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.269675 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.319609 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.614472 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728443 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728562 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728611 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728662 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728695 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728732 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728760 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728786 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728846 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728878 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728968 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 
05:47:03.732562 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.734012 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.734330 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.734348 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.735720 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.738423 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info" (OuterVolumeSpecName: "pod-info") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.738640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n" (OuterVolumeSpecName: "kube-api-access-zch8n") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "kube-api-access-zch8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.742577 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.760450 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data" (OuterVolumeSpecName: "config-data") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.792001 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf" (OuterVolumeSpecName: "server-conf") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836861 5012 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836889 5012 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836901 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836910 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 
19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836920 5012 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836928 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836936 5012 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836943 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836953 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836972 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.858373 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.884754 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.907989 5012 generic.go:334] "Generic (PLEG): container finished" podID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerID="0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" exitCode=0 Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.908938 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerDied","Data":"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7"} Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.909018 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerDied","Data":"856efb676cb6080920d1573427ad1823ab21a0fe78f76dfb2cca62d969151964"} Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.909107 5012 scope.go:117] "RemoveContainer" containerID="0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.909260 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.938920 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.939356 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.946488 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.962831 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.964561 5012 scope.go:117] "RemoveContainer" containerID="0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.991956 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.001020 5012 scope.go:117] "RemoveContainer" containerID="0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" Feb 19 05:47:04 crc kubenswrapper[5012]: E0219 05:47:04.001470 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7\": container with ID starting with 0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7 not found: ID does not exist" containerID="0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.001499 5012 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7"} err="failed to get container status \"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7\": rpc error: code = NotFound desc = could not find container \"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7\": container with ID starting with 0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7 not found: ID does not exist" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.001515 5012 scope.go:117] "RemoveContainer" containerID="0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.001561 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:04 crc kubenswrapper[5012]: E0219 05:47:04.002043 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="setup-container" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.002061 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="setup-container" Feb 19 05:47:04 crc kubenswrapper[5012]: E0219 05:47:04.002069 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="rabbitmq" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.002076 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="rabbitmq" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.002276 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="rabbitmq" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.003624 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: E0219 05:47:04.005095 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e\": container with ID starting with 0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e not found: ID does not exist" containerID="0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.005117 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e"} err="failed to get container status \"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e\": rpc error: code = NotFound desc = could not find container \"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e\": container with ID starting with 0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e not found: ID does not exist" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.006187 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.006314 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.007120 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.007415 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.007793 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 
05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.007959 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hd6wk" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.008009 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.014878 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143091 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143141 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143180 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143202 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143224 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143254 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8khhf\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-kube-api-access-8khhf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143362 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143397 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143425 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143453 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143475 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244697 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244738 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244763 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244783 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244799 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244819 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8khhf\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-kube-api-access-8khhf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244886 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc 
kubenswrapper[5012]: I0219 05:47:04.244906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244925 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244944 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.246572 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.247064 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.247261 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.247440 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.247502 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.248005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.250760 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.252373 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.254215 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.272331 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8khhf\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-kube-api-access-8khhf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.289922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.318566 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.477773 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.727437 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" path="/var/lib/kubelet/pods/a13d3004-2045-4daf-a925-7eccf541b1b4/volumes" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.728849 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0095712-262e-4562-afac-0f2f4372224d" path="/var/lib/kubelet/pods/b0095712-262e-4562-afac-0f2f4372224d/volumes" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.932586 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3230f97-dbe4-42a2-b009-a8370c601e78","Type":"ContainerStarted","Data":"d354f62d44e5f63a95b6b079a247b5cc3acc6fb019f9aee96c3454d111e96a36"} Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.988222 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:05 crc kubenswrapper[5012]: I0219 05:47:05.943371 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4984f0c1-33e8-4506-b6d7-e554dca0e4c8","Type":"ContainerStarted","Data":"e5696085150a0dd54f0403f350a377e9916faf97409e0b34d5b0f533eb7c29d9"} Feb 19 05:47:05 crc kubenswrapper[5012]: I0219 05:47:05.945687 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3230f97-dbe4-42a2-b009-a8370c601e78","Type":"ContainerStarted","Data":"5b94da6e2f63b65256a5323ada20105efbf8a87206940d39e3ae90200a8c11c8"} Feb 19 05:47:06 crc kubenswrapper[5012]: I0219 05:47:06.958994 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4984f0c1-33e8-4506-b6d7-e554dca0e4c8","Type":"ContainerStarted","Data":"efce8bcde4f16e1cdff511b08b15a8a5dfb4bcdfa22431bcd1cde7bae1124379"} Feb 19 05:47:10 crc kubenswrapper[5012]: I0219 
05:47:10.971756 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:10 crc kubenswrapper[5012]: I0219 05:47:10.975528 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:10 crc kubenswrapper[5012]: I0219 05:47:10.978255 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 05:47:10 crc kubenswrapper[5012]: I0219 05:47:10.981892 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.024242 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.024546 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.039746 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.039839 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.040079 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.040133 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.040187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142522 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142755 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142799 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142844 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.143009 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.144610 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.144622 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.145442 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.145842 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.146565 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") pod 
\"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.147591 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.170420 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.319938 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.648258 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:12 crc kubenswrapper[5012]: I0219 05:47:12.068539 5012 generic.go:334] "Generic (PLEG): container finished" podID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerID="edfd4a155bf6b965b21fa78e413f2f1a7e00b71bc7aa7c9a548683c08bce3705" exitCode=0 Feb 19 05:47:12 crc kubenswrapper[5012]: I0219 05:47:12.068625 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerDied","Data":"edfd4a155bf6b965b21fa78e413f2f1a7e00b71bc7aa7c9a548683c08bce3705"} Feb 19 05:47:12 crc kubenswrapper[5012]: I0219 05:47:12.068860 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerStarted","Data":"69c77bc6b304c47cc6f82821180fd1b3759319ae800b54ff837c06429e637adf"} Feb 19 05:47:13 crc kubenswrapper[5012]: I0219 05:47:13.085970 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerStarted","Data":"8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6"} Feb 19 05:47:13 crc kubenswrapper[5012]: I0219 05:47:13.086775 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:13 crc kubenswrapper[5012]: I0219 05:47:13.128690 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" podStartSLOduration=3.128669109 podStartE2EDuration="3.128669109s" podCreationTimestamp="2026-02-19 05:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:47:13.119579778 +0000 UTC m=+1329.152902387" watchObservedRunningTime="2026-02-19 05:47:13.128669109 +0000 UTC m=+1329.161991688" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.322625 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.459107 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.461513 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="dnsmasq-dns" containerID="cri-o://d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" gracePeriod=10 Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.602801 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-567c7bc999-cgf2v"] Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.604455 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.625025 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c7bc999-cgf2v"] Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712082 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-sb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712136 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65g8\" (UniqueName: \"kubernetes.io/projected/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-kube-api-access-x65g8\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712161 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-config\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712183 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-svc\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712237 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-swift-storage-0\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-openstack-edpm-ipam\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712281 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-nb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814135 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-swift-storage-0\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814173 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-openstack-edpm-ipam\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814218 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-nb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814857 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-sb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814998 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x65g8\" (UniqueName: \"kubernetes.io/projected/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-kube-api-access-x65g8\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.815005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-openstack-edpm-ipam\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.815086 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-config\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.815161 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-svc\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816324 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-sb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816539 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-svc\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816695 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-config\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816759 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-swift-storage-0\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816906 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-nb\") pod 
\"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.853756 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65g8\" (UniqueName: \"kubernetes.io/projected/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-kube-api-access-x65g8\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.952566 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.034005 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121424 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121481 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121522 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: 
I0219 05:47:22.121605 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121678 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121743 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.130916 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2" (OuterVolumeSpecName: "kube-api-access-nb4s2") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "kube-api-access-nb4s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.173432 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.176809 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.183284 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config" (OuterVolumeSpecName: "config") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.191195 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.194962 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196256 5012 generic.go:334] "Generic (PLEG): container finished" podID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerID="d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" exitCode=0 Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196294 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerDied","Data":"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498"} Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196333 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerDied","Data":"76c330a33b78602a2e427fa0cfc346da48f97fdaa4760b156caa3f21371da964"} Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196349 5012 scope.go:117] "RemoveContainer" containerID="d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196472 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224807 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224853 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224872 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224888 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224906 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224924 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.225829 5012 scope.go:117] "RemoveContainer" containerID="26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.239423 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.247780 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.256450 5012 scope.go:117] "RemoveContainer" containerID="d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" Feb 19 05:47:22 crc kubenswrapper[5012]: E0219 05:47:22.256908 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498\": container with ID starting with d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498 not found: ID does not exist" containerID="d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.256938 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498"} err="failed to get container status \"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498\": rpc error: code = NotFound desc = could not find container \"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498\": container with ID starting with d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498 not found: ID does not exist" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.256958 5012 scope.go:117] "RemoveContainer" containerID="26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488" Feb 19 05:47:22 crc kubenswrapper[5012]: E0219 05:47:22.257369 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488\": container with ID starting with 26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488 not found: ID 
does not exist" containerID="26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.257403 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488"} err="failed to get container status \"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488\": rpc error: code = NotFound desc = could not find container \"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488\": container with ID starting with 26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488 not found: ID does not exist" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.403853 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c7bc999-cgf2v"] Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.714364 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" path="/var/lib/kubelet/pods/0ee4ae6f-65e3-4467-8302-54381eeebd5a/volumes" Feb 19 05:47:23 crc kubenswrapper[5012]: I0219 05:47:23.228292 5012 generic.go:334] "Generic (PLEG): container finished" podID="c2eab861-ab13-4ab1-b57f-fecf9e95b9be" containerID="25dd4695daea4f17fcb807b874ac4e301bc313a41f0e67b92a9e96821a21a22b" exitCode=0 Feb 19 05:47:23 crc kubenswrapper[5012]: I0219 05:47:23.228384 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" event={"ID":"c2eab861-ab13-4ab1-b57f-fecf9e95b9be","Type":"ContainerDied","Data":"25dd4695daea4f17fcb807b874ac4e301bc313a41f0e67b92a9e96821a21a22b"} Feb 19 05:47:23 crc kubenswrapper[5012]: I0219 05:47:23.228424 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" event={"ID":"c2eab861-ab13-4ab1-b57f-fecf9e95b9be","Type":"ContainerStarted","Data":"cbf8ac96d7b31681d13f7cfef4554591ea01771a74e619e1f160646a3c89d3a4"} Feb 19 
05:47:24 crc kubenswrapper[5012]: I0219 05:47:24.251547 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" event={"ID":"c2eab861-ab13-4ab1-b57f-fecf9e95b9be","Type":"ContainerStarted","Data":"875d6defbdeb9cb80a35af95c4c12714cb388e66161dba52f95953708cc102c9"} Feb 19 05:47:24 crc kubenswrapper[5012]: I0219 05:47:24.251783 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:24 crc kubenswrapper[5012]: I0219 05:47:24.286904 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" podStartSLOduration=3.286885845 podStartE2EDuration="3.286885845s" podCreationTimestamp="2026-02-19 05:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:47:24.283357019 +0000 UTC m=+1340.316679588" watchObservedRunningTime="2026-02-19 05:47:24.286885845 +0000 UTC m=+1340.320208414" Feb 19 05:47:31 crc kubenswrapper[5012]: I0219 05:47:31.954569 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.067944 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.068253 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="dnsmasq-dns" containerID="cri-o://8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6" gracePeriod=10 Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.358902 5012 generic.go:334] "Generic (PLEG): container finished" podID="5e2a5c46-de05-416e-886e-f52dadc04d9f" 
containerID="8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6" exitCode=0 Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.358978 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerDied","Data":"8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6"} Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.747054 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887617 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887683 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887724 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887749 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") 
" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887865 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.888063 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.888142 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.903245 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c" (OuterVolumeSpecName: "kube-api-access-z7k6c") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "kube-api-access-z7k6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.957656 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.960629 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.963656 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.964826 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.965158 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config" (OuterVolumeSpecName: "config") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992126 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992158 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992175 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992185 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992194 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992203 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.003246 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: 
"5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.093049 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.372999 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerDied","Data":"69c77bc6b304c47cc6f82821180fd1b3759319ae800b54ff837c06429e637adf"} Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.373055 5012 scope.go:117] "RemoveContainer" containerID="8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.373081 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.400779 5012 scope.go:117] "RemoveContainer" containerID="edfd4a155bf6b965b21fa78e413f2f1a7e00b71bc7aa7c9a548683c08bce3705" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.428048 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.440849 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:34 crc kubenswrapper[5012]: I0219 05:47:34.718379 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" path="/var/lib/kubelet/pods/5e2a5c46-de05-416e-886e-f52dadc04d9f/volumes" Feb 19 05:47:39 crc kubenswrapper[5012]: I0219 05:47:39.430290 5012 generic.go:334] "Generic (PLEG): container finished" podID="c3230f97-dbe4-42a2-b009-a8370c601e78" containerID="5b94da6e2f63b65256a5323ada20105efbf8a87206940d39e3ae90200a8c11c8" exitCode=0 Feb 19 05:47:39 crc kubenswrapper[5012]: I0219 05:47:39.430342 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3230f97-dbe4-42a2-b009-a8370c601e78","Type":"ContainerDied","Data":"5b94da6e2f63b65256a5323ada20105efbf8a87206940d39e3ae90200a8c11c8"} Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 05:47:40.440940 5012 generic.go:334] "Generic (PLEG): container finished" podID="4984f0c1-33e8-4506-b6d7-e554dca0e4c8" containerID="efce8bcde4f16e1cdff511b08b15a8a5dfb4bcdfa22431bcd1cde7bae1124379" exitCode=0 Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 05:47:40.441092 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4984f0c1-33e8-4506-b6d7-e554dca0e4c8","Type":"ContainerDied","Data":"efce8bcde4f16e1cdff511b08b15a8a5dfb4bcdfa22431bcd1cde7bae1124379"} Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 
05:47:40.446970 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3230f97-dbe4-42a2-b009-a8370c601e78","Type":"ContainerStarted","Data":"2216d27f73b168891c63b5b2774965132a7c9688deeb594eb17587339fcce48f"} Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 05:47:40.447217 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 05:47:40.505962 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.505939343 podStartE2EDuration="38.505939343s" podCreationTimestamp="2026-02-19 05:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:47:40.499044936 +0000 UTC m=+1356.532367505" watchObservedRunningTime="2026-02-19 05:47:40.505939343 +0000 UTC m=+1356.539261912" Feb 19 05:47:41 crc kubenswrapper[5012]: I0219 05:47:41.458407 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4984f0c1-33e8-4506-b6d7-e554dca0e4c8","Type":"ContainerStarted","Data":"aa4c8b16f0bd68c5ef564e8d9831e5c6c5e141f31a19bfd116c23ecaf084cec4"} Feb 19 05:47:41 crc kubenswrapper[5012]: I0219 05:47:41.459272 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:41 crc kubenswrapper[5012]: I0219 05:47:41.495014 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.494991434 podStartE2EDuration="38.494991434s" podCreationTimestamp="2026-02-19 05:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:47:41.489354807 +0000 UTC m=+1357.522677386" watchObservedRunningTime="2026-02-19 
05:47:41.494991434 +0000 UTC m=+1357.528314013" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.720977 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267"] Feb 19 05:47:45 crc kubenswrapper[5012]: E0219 05:47:45.721763 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.721775 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: E0219 05:47:45.721788 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="init" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.721796 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="init" Feb 19 05:47:45 crc kubenswrapper[5012]: E0219 05:47:45.721803 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.721809 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: E0219 05:47:45.721833 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="init" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.721838 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="init" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.722042 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="dnsmasq-dns" Feb 19 05:47:45 crc 
kubenswrapper[5012]: I0219 05:47:45.722069 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.722736 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.725449 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.725645 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.725742 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.725742 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.735236 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267"] Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.805036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.805130 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.805190 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.805367 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.249879 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.249946 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: 
\"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.249990 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.250080 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.256171 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.256761 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.259746 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.274585 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.385144 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:47 crc kubenswrapper[5012]: I0219 05:47:47.149527 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267"] Feb 19 05:47:47 crc kubenswrapper[5012]: I0219 05:47:47.539088 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" event={"ID":"61bd41ab-cfea-4df2-9be0-8321c6c11ebd","Type":"ContainerStarted","Data":"312fd7dc0497e3a4381040e75f8869b7855c91d4636c392e802e463c727cce3d"} Feb 19 05:47:53 crc kubenswrapper[5012]: I0219 05:47:53.324692 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 05:47:54 crc kubenswrapper[5012]: I0219 05:47:54.481456 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:57 crc kubenswrapper[5012]: I0219 05:47:57.658260 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" event={"ID":"61bd41ab-cfea-4df2-9be0-8321c6c11ebd","Type":"ContainerStarted","Data":"5c5c63555870264dd8f5829bdd82139299500a7348f6bff770b40402e6e4c5e7"} Feb 19 05:47:57 crc kubenswrapper[5012]: I0219 05:47:57.672401 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" podStartSLOduration=2.487760755 podStartE2EDuration="12.672381586s" podCreationTimestamp="2026-02-19 05:47:45 +0000 UTC" firstStartedPulling="2026-02-19 05:47:47.158258126 +0000 UTC m=+1363.191580695" lastFinishedPulling="2026-02-19 05:47:57.342878917 +0000 UTC m=+1373.376201526" observedRunningTime="2026-02-19 05:47:57.671541816 +0000 UTC m=+1373.704864385" watchObservedRunningTime="2026-02-19 05:47:57.672381586 +0000 UTC m=+1373.705704175" Feb 19 05:48:08 crc kubenswrapper[5012]: I0219 05:48:08.794585 5012 generic.go:334] "Generic (PLEG): container finished" podID="61bd41ab-cfea-4df2-9be0-8321c6c11ebd" containerID="5c5c63555870264dd8f5829bdd82139299500a7348f6bff770b40402e6e4c5e7" exitCode=0 Feb 19 05:48:08 crc kubenswrapper[5012]: I0219 05:48:08.794696 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" event={"ID":"61bd41ab-cfea-4df2-9be0-8321c6c11ebd","Type":"ContainerDied","Data":"5c5c63555870264dd8f5829bdd82139299500a7348f6bff770b40402e6e4c5e7"} Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.330587 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.410956 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") pod \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.411020 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") pod \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.411125 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") pod \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.411253 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") pod \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.419597 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "61bd41ab-cfea-4df2-9be0-8321c6c11ebd" (UID: "61bd41ab-cfea-4df2-9be0-8321c6c11ebd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.424092 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4" (OuterVolumeSpecName: "kube-api-access-4rsr4") pod "61bd41ab-cfea-4df2-9be0-8321c6c11ebd" (UID: "61bd41ab-cfea-4df2-9be0-8321c6c11ebd"). InnerVolumeSpecName "kube-api-access-4rsr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.459985 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "61bd41ab-cfea-4df2-9be0-8321c6c11ebd" (UID: "61bd41ab-cfea-4df2-9be0-8321c6c11ebd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.462081 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory" (OuterVolumeSpecName: "inventory") pod "61bd41ab-cfea-4df2-9be0-8321c6c11ebd" (UID: "61bd41ab-cfea-4df2-9be0-8321c6c11ebd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.514767 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.514806 5012 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.514821 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.514834 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.831603 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" event={"ID":"61bd41ab-cfea-4df2-9be0-8321c6c11ebd","Type":"ContainerDied","Data":"312fd7dc0497e3a4381040e75f8869b7855c91d4636c392e802e463c727cce3d"} Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.832042 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="312fd7dc0497e3a4381040e75f8869b7855c91d4636c392e802e463c727cce3d" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.832127 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.924165 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd"] Feb 19 05:48:10 crc kubenswrapper[5012]: E0219 05:48:10.924572 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bd41ab-cfea-4df2-9be0-8321c6c11ebd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.924588 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bd41ab-cfea-4df2-9be0-8321c6c11ebd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.924765 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="61bd41ab-cfea-4df2-9be0-8321c6c11ebd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.925397 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.928335 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.928567 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.928760 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.929783 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.942322 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd"] Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.028865 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.029016 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.029063 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.130719 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.131002 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.131215 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.134725 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: 
\"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.147820 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.148031 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.242478 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.786821 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd"] Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.852155 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" event={"ID":"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf","Type":"ContainerStarted","Data":"ebc32bcde894477353387c9790c11ae4abdee0c0ff7499b7ab0358220f947c8f"} Feb 19 05:48:12 crc kubenswrapper[5012]: I0219 05:48:12.869112 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" event={"ID":"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf","Type":"ContainerStarted","Data":"46af40643823a33247a982604a9c72359b1d69b943d4655962b2a8e91ff0bdef"} Feb 19 05:48:12 crc kubenswrapper[5012]: I0219 05:48:12.892888 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" podStartSLOduration=2.443493901 podStartE2EDuration="2.892862598s" podCreationTimestamp="2026-02-19 05:48:10 +0000 UTC" firstStartedPulling="2026-02-19 05:48:11.793922443 +0000 UTC m=+1387.827245012" lastFinishedPulling="2026-02-19 05:48:12.24329114 +0000 UTC m=+1388.276613709" observedRunningTime="2026-02-19 05:48:12.888468821 +0000 UTC m=+1388.921791430" watchObservedRunningTime="2026-02-19 05:48:12.892862598 +0000 UTC m=+1388.926185207" Feb 19 05:48:15 crc kubenswrapper[5012]: I0219 05:48:15.903136 5012 generic.go:334] "Generic (PLEG): container finished" podID="07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" containerID="46af40643823a33247a982604a9c72359b1d69b943d4655962b2a8e91ff0bdef" exitCode=0 Feb 19 05:48:15 crc kubenswrapper[5012]: I0219 05:48:15.903546 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" event={"ID":"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf","Type":"ContainerDied","Data":"46af40643823a33247a982604a9c72359b1d69b943d4655962b2a8e91ff0bdef"} Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.445652 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.632197 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") pod \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.632399 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") pod \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.633539 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") pod \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.658644 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7" (OuterVolumeSpecName: "kube-api-access-wj5p7") pod "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" (UID: "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf"). InnerVolumeSpecName "kube-api-access-wj5p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.680048 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" (UID: "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.691199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory" (OuterVolumeSpecName: "inventory") pod "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" (UID: "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.737633 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.737694 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.737717 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.930425 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" 
event={"ID":"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf","Type":"ContainerDied","Data":"ebc32bcde894477353387c9790c11ae4abdee0c0ff7499b7ab0358220f947c8f"} Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.930494 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebc32bcde894477353387c9790c11ae4abdee0c0ff7499b7ab0358220f947c8f" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.930510 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.064387 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb"] Feb 19 05:48:18 crc kubenswrapper[5012]: E0219 05:48:18.064857 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.064871 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.065088 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.065777 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069332 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069395 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069488 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069667 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069695 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069727 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.070019 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.070075 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.103366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb"] Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.171256 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.171406 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.171470 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.171493 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.175673 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.175730 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.176929 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.200584 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.393427 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:19 crc kubenswrapper[5012]: I0219 05:48:19.100956 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb"] Feb 19 05:48:19 crc kubenswrapper[5012]: I0219 05:48:19.968585 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" event={"ID":"ebf47868-aec9-4f2e-8c08-499161f45b18","Type":"ContainerStarted","Data":"29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e"} Feb 19 05:48:19 crc kubenswrapper[5012]: I0219 05:48:19.969746 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" event={"ID":"ebf47868-aec9-4f2e-8c08-499161f45b18","Type":"ContainerStarted","Data":"47a7d8825e1acf3d49de6e07e3e26af34d28c13a97cb0ebcbb15be03c70da6f3"} Feb 19 05:48:19 crc kubenswrapper[5012]: I0219 05:48:19.997891 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" podStartSLOduration=1.611738653 podStartE2EDuration="1.997867471s" podCreationTimestamp="2026-02-19 05:48:18 +0000 UTC" firstStartedPulling="2026-02-19 05:48:19.094621937 +0000 UTC m=+1395.127944546" 
lastFinishedPulling="2026-02-19 05:48:19.480750755 +0000 UTC m=+1395.514073364" observedRunningTime="2026-02-19 05:48:19.989872076 +0000 UTC m=+1396.023194655" watchObservedRunningTime="2026-02-19 05:48:19.997867471 +0000 UTC m=+1396.031190040" Feb 19 05:48:32 crc kubenswrapper[5012]: I0219 05:48:32.835821 5012 scope.go:117] "RemoveContainer" containerID="8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423" Feb 19 05:48:32 crc kubenswrapper[5012]: I0219 05:48:32.880224 5012 scope.go:117] "RemoveContainer" containerID="abc0139cac003d44d29c14053f3981b5bda18d4f49ee4f01ff970a93700f4fc7" Feb 19 05:48:32 crc kubenswrapper[5012]: I0219 05:48:32.952953 5012 scope.go:117] "RemoveContainer" containerID="d8e57b0f2b52b5aa983f227ca12d7b7d13d90cca4cada2357120cb84084b1554" Feb 19 05:48:32 crc kubenswrapper[5012]: I0219 05:48:32.995801 5012 scope.go:117] "RemoveContainer" containerID="110e2fb48dbdbaaee96e12fd6145e56296c9e6c4ec3ed95da58954f821868b52" Feb 19 05:48:34 crc kubenswrapper[5012]: I0219 05:48:34.931250 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:34 crc kubenswrapper[5012]: I0219 05:48:34.938071 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:34 crc kubenswrapper[5012]: I0219 05:48:34.947522 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.086099 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.086198 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.086292 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.188654 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.188783 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.188900 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.189390 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.190000 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.214118 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.310327 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.791087 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:36 crc kubenswrapper[5012]: I0219 05:48:36.176572 5012 generic.go:334] "Generic (PLEG): container finished" podID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerID="96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481" exitCode=0 Feb 19 05:48:36 crc kubenswrapper[5012]: I0219 05:48:36.176622 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerDied","Data":"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481"} Feb 19 05:48:36 crc kubenswrapper[5012]: I0219 05:48:36.176675 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerStarted","Data":"77be9753ec657fff700d5aa8c08179d08b1e0427037a7558362bc489f12b2bf9"} Feb 19 05:48:38 crc kubenswrapper[5012]: I0219 05:48:38.201686 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerStarted","Data":"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc"} Feb 19 05:48:40 crc kubenswrapper[5012]: I0219 05:48:40.225189 5012 generic.go:334] "Generic (PLEG): container finished" podID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerID="4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc" exitCode=0 Feb 19 05:48:40 crc kubenswrapper[5012]: I0219 05:48:40.225285 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" 
event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerDied","Data":"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc"} Feb 19 05:48:41 crc kubenswrapper[5012]: I0219 05:48:41.239319 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerStarted","Data":"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69"} Feb 19 05:48:41 crc kubenswrapper[5012]: I0219 05:48:41.266390 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dm5jf" podStartSLOduration=2.8300190069999998 podStartE2EDuration="7.266370789s" podCreationTimestamp="2026-02-19 05:48:34 +0000 UTC" firstStartedPulling="2026-02-19 05:48:36.178717626 +0000 UTC m=+1412.212040195" lastFinishedPulling="2026-02-19 05:48:40.615069408 +0000 UTC m=+1416.648391977" observedRunningTime="2026-02-19 05:48:41.260776463 +0000 UTC m=+1417.294099032" watchObservedRunningTime="2026-02-19 05:48:41.266370789 +0000 UTC m=+1417.299693358" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.530393 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.535567 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.544022 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.703476 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.703726 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.703761 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.808320 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.808424 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.808443 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.808917 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.809027 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.830607 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.863370 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:44 crc kubenswrapper[5012]: I0219 05:48:44.422384 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:44 crc kubenswrapper[5012]: I0219 05:48:44.431116 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:48:44 crc kubenswrapper[5012]: I0219 05:48:44.431173 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 05:48:45.311425 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 05:48:45.311781 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 05:48:45.326707 5012 generic.go:334] "Generic (PLEG): container finished" podID="934d7854-a117-4051-a05a-034327616c89" containerID="391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9" exitCode=0 Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 05:48:45.326783 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerDied","Data":"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9"} Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 
05:48:45.326861 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerStarted","Data":"6945b042fdd5b863d7b689dc7a424c7059487e0b30d92b16e72e614d02f9e037"} Feb 19 05:48:47 crc kubenswrapper[5012]: I0219 05:48:46.337045 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerStarted","Data":"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219"} Feb 19 05:48:47 crc kubenswrapper[5012]: I0219 05:48:47.793134 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dm5jf" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" probeResult="failure" output=< Feb 19 05:48:47 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:48:47 crc kubenswrapper[5012]: > Feb 19 05:48:48 crc kubenswrapper[5012]: I0219 05:48:48.360856 5012 generic.go:334] "Generic (PLEG): container finished" podID="934d7854-a117-4051-a05a-034327616c89" containerID="8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219" exitCode=0 Feb 19 05:48:48 crc kubenswrapper[5012]: I0219 05:48:48.360899 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerDied","Data":"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219"} Feb 19 05:48:49 crc kubenswrapper[5012]: I0219 05:48:49.374692 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerStarted","Data":"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797"} Feb 19 05:48:49 crc kubenswrapper[5012]: I0219 05:48:49.397551 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zcjd2" podStartSLOduration=2.903983208 podStartE2EDuration="6.397528613s" podCreationTimestamp="2026-02-19 05:48:43 +0000 UTC" firstStartedPulling="2026-02-19 05:48:45.329937666 +0000 UTC m=+1421.363260255" lastFinishedPulling="2026-02-19 05:48:48.823483091 +0000 UTC m=+1424.856805660" observedRunningTime="2026-02-19 05:48:49.392716115 +0000 UTC m=+1425.426038694" watchObservedRunningTime="2026-02-19 05:48:49.397528613 +0000 UTC m=+1425.430851202" Feb 19 05:48:53 crc kubenswrapper[5012]: I0219 05:48:53.863822 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:53 crc kubenswrapper[5012]: I0219 05:48:53.864614 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:53 crc kubenswrapper[5012]: I0219 05:48:53.919613 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:54 crc kubenswrapper[5012]: I0219 05:48:54.507516 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:54 crc kubenswrapper[5012]: I0219 05:48:54.573274 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:55 crc kubenswrapper[5012]: I0219 05:48:55.387491 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:55 crc kubenswrapper[5012]: I0219 05:48:55.460949 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:56 crc kubenswrapper[5012]: I0219 05:48:56.455771 5012 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-zcjd2" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="registry-server" containerID="cri-o://983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" gracePeriod=2 Feb 19 05:48:56 crc kubenswrapper[5012]: I0219 05:48:56.576458 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:56 crc kubenswrapper[5012]: I0219 05:48:56.577147 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dm5jf" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" containerID="cri-o://77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" gracePeriod=2 Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.176824 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.320852 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") pod \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.321016 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") pod \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.321056 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") pod \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\" 
(UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.321962 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities" (OuterVolumeSpecName: "utilities") pod "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" (UID: "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.324041 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.325870 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb" (OuterVolumeSpecName: "kube-api-access-q6mlb") pod "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" (UID: "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4"). InnerVolumeSpecName "kube-api-access-q6mlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.361067 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.426166 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.466619 5012 generic.go:334] "Generic (PLEG): container finished" podID="934d7854-a117-4051-a05a-034327616c89" containerID="983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" exitCode=0 Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.466746 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerDied","Data":"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797"} Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.466779 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerDied","Data":"6945b042fdd5b863d7b689dc7a424c7059487e0b30d92b16e72e614d02f9e037"} Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.466824 5012 scope.go:117] "RemoveContainer" containerID="983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.467043 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.470986 5012 generic.go:334] "Generic (PLEG): container finished" podID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerID="77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" exitCode=0 Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.471025 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerDied","Data":"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69"} Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.471129 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.471156 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerDied","Data":"77be9753ec657fff700d5aa8c08179d08b1e0427037a7558362bc489f12b2bf9"} Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.471389 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" (UID: "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.493069 5012 scope.go:117] "RemoveContainer" containerID="8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.514265 5012 scope.go:117] "RemoveContainer" containerID="391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.528092 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") pod \"934d7854-a117-4051-a05a-034327616c89\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.528225 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") pod \"934d7854-a117-4051-a05a-034327616c89\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.528290 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") pod \"934d7854-a117-4051-a05a-034327616c89\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.528865 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.529084 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities" (OuterVolumeSpecName: "utilities") 
pod "934d7854-a117-4051-a05a-034327616c89" (UID: "934d7854-a117-4051-a05a-034327616c89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.530944 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb" (OuterVolumeSpecName: "kube-api-access-4cdlb") pod "934d7854-a117-4051-a05a-034327616c89" (UID: "934d7854-a117-4051-a05a-034327616c89"). InnerVolumeSpecName "kube-api-access-4cdlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.536649 5012 scope.go:117] "RemoveContainer" containerID="983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.537119 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797\": container with ID starting with 983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797 not found: ID does not exist" containerID="983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.537168 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797"} err="failed to get container status \"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797\": rpc error: code = NotFound desc = could not find container \"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797\": container with ID starting with 983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.537196 5012 scope.go:117] "RemoveContainer" 
containerID="8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.537663 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219\": container with ID starting with 8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219 not found: ID does not exist" containerID="8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.537736 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219"} err="failed to get container status \"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219\": rpc error: code = NotFound desc = could not find container \"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219\": container with ID starting with 8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.537771 5012 scope.go:117] "RemoveContainer" containerID="391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.538194 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9\": container with ID starting with 391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9 not found: ID does not exist" containerID="391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.538219 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9"} err="failed to get container status \"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9\": rpc error: code = NotFound desc = could not find container \"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9\": container with ID starting with 391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.538237 5012 scope.go:117] "RemoveContainer" containerID="77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.568120 5012 scope.go:117] "RemoveContainer" containerID="4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.575420 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "934d7854-a117-4051-a05a-034327616c89" (UID: "934d7854-a117-4051-a05a-034327616c89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.630470 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.630510 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.630524 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.637866 5012 scope.go:117] "RemoveContainer" containerID="96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.670914 5012 scope.go:117] "RemoveContainer" containerID="77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.671602 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69\": container with ID starting with 77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69 not found: ID does not exist" containerID="77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.671646 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69"} err="failed to get container status 
\"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69\": rpc error: code = NotFound desc = could not find container \"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69\": container with ID starting with 77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.671670 5012 scope.go:117] "RemoveContainer" containerID="4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.671966 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc\": container with ID starting with 4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc not found: ID does not exist" containerID="4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.671997 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc"} err="failed to get container status \"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc\": rpc error: code = NotFound desc = could not find container \"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc\": container with ID starting with 4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.672065 5012 scope.go:117] "RemoveContainer" containerID="96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.672436 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481\": container with ID starting with 96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481 not found: ID does not exist" containerID="96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.672461 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481"} err="failed to get container status \"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481\": rpc error: code = NotFound desc = could not find container \"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481\": container with ID starting with 96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.818988 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.833433 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.846378 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.859175 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:58 crc kubenswrapper[5012]: I0219 05:48:58.722664 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" path="/var/lib/kubelet/pods/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4/volumes" Feb 19 05:48:58 crc kubenswrapper[5012]: I0219 05:48:58.724413 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934d7854-a117-4051-a05a-034327616c89" 
path="/var/lib/kubelet/pods/934d7854-a117-4051-a05a-034327616c89/volumes" Feb 19 05:49:14 crc kubenswrapper[5012]: I0219 05:49:14.430572 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:49:14 crc kubenswrapper[5012]: I0219 05:49:14.431197 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:49:33 crc kubenswrapper[5012]: I0219 05:49:33.188816 5012 scope.go:117] "RemoveContainer" containerID="55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e" Feb 19 05:49:33 crc kubenswrapper[5012]: I0219 05:49:33.407437 5012 scope.go:117] "RemoveContainer" containerID="12a292fc1b8e4523fdc0fb30ca3590a1b6b6f0c70c3e42e076f92a7b213241f2" Feb 19 05:49:33 crc kubenswrapper[5012]: I0219 05:49:33.464007 5012 scope.go:117] "RemoveContainer" containerID="3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee" Feb 19 05:49:33 crc kubenswrapper[5012]: I0219 05:49:33.530701 5012 scope.go:117] "RemoveContainer" containerID="e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d" Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.430931 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.431708 5012 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.431777 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.433638 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.433772 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" gracePeriod=600 Feb 19 05:49:44 crc kubenswrapper[5012]: E0219 05:49:44.588384 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:49:45 crc kubenswrapper[5012]: I0219 05:49:45.149266 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" 
containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" exitCode=0 Feb 19 05:49:45 crc kubenswrapper[5012]: I0219 05:49:45.149357 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"} Feb 19 05:49:45 crc kubenswrapper[5012]: I0219 05:49:45.149649 5012 scope.go:117] "RemoveContainer" containerID="6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac" Feb 19 05:49:45 crc kubenswrapper[5012]: I0219 05:49:45.150782 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:49:45 crc kubenswrapper[5012]: E0219 05:49:45.151436 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:49:59 crc kubenswrapper[5012]: I0219 05:49:59.704217 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:49:59 crc kubenswrapper[5012]: E0219 05:49:59.705490 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:50:11 crc kubenswrapper[5012]: I0219 
05:50:11.704456 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:50:11 crc kubenswrapper[5012]: E0219 05:50:11.705827 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.889355 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891558 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891600 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891627 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891639 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891660 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="extract-utilities" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891674 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="extract-utilities" Feb 19 
05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891717 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="extract-content" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891732 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="extract-content" Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891796 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="extract-utilities" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891808 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="extract-utilities" Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891824 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="extract-content" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891838 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="extract-content" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.892209 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.892258 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.894976 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.902578 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.045593 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.045732 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.045977 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.148210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.148428 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.148534 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.149489 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.149508 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.184067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.255484 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.814063 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:19 crc kubenswrapper[5012]: W0219 05:50:19.816202 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd355dfcc_ca49_41f8_93b7_630cf9a2a20f.slice/crio-9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e WatchSource:0}: Error finding container 9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e: Status 404 returned error can't find the container with id 9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e Feb 19 05:50:20 crc kubenswrapper[5012]: I0219 05:50:20.617184 5012 generic.go:334] "Generic (PLEG): container finished" podID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerID="7d9d854cd627b97bd284e92095af02bb1f0cd8fa5c703cf9288525eb0d73cbb0" exitCode=0 Feb 19 05:50:20 crc kubenswrapper[5012]: I0219 05:50:20.617417 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerDied","Data":"7d9d854cd627b97bd284e92095af02bb1f0cd8fa5c703cf9288525eb0d73cbb0"} Feb 19 05:50:20 crc kubenswrapper[5012]: I0219 05:50:20.617552 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerStarted","Data":"9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e"} Feb 19 05:50:20 crc kubenswrapper[5012]: I0219 05:50:20.622365 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:50:21 crc kubenswrapper[5012]: I0219 05:50:21.630381 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerStarted","Data":"5b1ada347824fa800386982818c8efb6c13ea89793c8a09d04b560111d8a1555"} Feb 19 05:50:22 crc kubenswrapper[5012]: I0219 05:50:22.647508 5012 generic.go:334] "Generic (PLEG): container finished" podID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerID="5b1ada347824fa800386982818c8efb6c13ea89793c8a09d04b560111d8a1555" exitCode=0 Feb 19 05:50:22 crc kubenswrapper[5012]: I0219 05:50:22.647611 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerDied","Data":"5b1ada347824fa800386982818c8efb6c13ea89793c8a09d04b560111d8a1555"} Feb 19 05:50:23 crc kubenswrapper[5012]: I0219 05:50:23.663903 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerStarted","Data":"acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a"} Feb 19 05:50:23 crc kubenswrapper[5012]: I0219 05:50:23.701590 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2tcvb" podStartSLOduration=3.254314919 podStartE2EDuration="5.70155704s" podCreationTimestamp="2026-02-19 05:50:18 +0000 UTC" firstStartedPulling="2026-02-19 05:50:20.621937612 +0000 UTC m=+1516.655260221" lastFinishedPulling="2026-02-19 05:50:23.069179733 +0000 UTC m=+1519.102502342" observedRunningTime="2026-02-19 05:50:23.694763964 +0000 UTC m=+1519.728086573" watchObservedRunningTime="2026-02-19 05:50:23.70155704 +0000 UTC m=+1519.734879649" Feb 19 05:50:25 crc kubenswrapper[5012]: I0219 05:50:25.702762 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:50:25 crc kubenswrapper[5012]: E0219 05:50:25.703268 5012 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.256375 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.257273 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.340033 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.825268 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.893702 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:31 crc kubenswrapper[5012]: I0219 05:50:31.783045 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2tcvb" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="registry-server" containerID="cri-o://acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a" gracePeriod=2 Feb 19 05:50:32 crc kubenswrapper[5012]: I0219 05:50:32.814178 5012 generic.go:334] "Generic (PLEG): container finished" podID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerID="acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a" exitCode=0 Feb 19 
05:50:32 crc kubenswrapper[5012]: I0219 05:50:32.814553 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerDied","Data":"acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a"} Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.460714 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.610975 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") pod \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.611222 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") pod \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.611267 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") pod \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.612330 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities" (OuterVolumeSpecName: "utilities") pod "d355dfcc-ca49-41f8-93b7-630cf9a2a20f" (UID: "d355dfcc-ca49-41f8-93b7-630cf9a2a20f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.619890 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4" (OuterVolumeSpecName: "kube-api-access-2hbf4") pod "d355dfcc-ca49-41f8-93b7-630cf9a2a20f" (UID: "d355dfcc-ca49-41f8-93b7-630cf9a2a20f"). InnerVolumeSpecName "kube-api-access-2hbf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.668165 5012 scope.go:117] "RemoveContainer" containerID="5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81"
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.690990 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d355dfcc-ca49-41f8-93b7-630cf9a2a20f" (UID: "d355dfcc-ca49-41f8-93b7-630cf9a2a20f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.708616 5012 scope.go:117] "RemoveContainer" containerID="8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413"
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.715238 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") on node \"crc\" DevicePath \"\""
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.715286 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.715328 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.736268 5012 scope.go:117] "RemoveContainer" containerID="1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff"
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.768876 5012 scope.go:117] "RemoveContainer" containerID="f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c"
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.792193 5012 scope.go:117] "RemoveContainer" containerID="bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c"
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.834398 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerDied","Data":"9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e"}
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.834455 5012 scope.go:117] "RemoveContainer" containerID="acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a"
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.834618 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tcvb"
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.883366 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"]
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.893410 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"]
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.903332 5012 scope.go:117] "RemoveContainer" containerID="5b1ada347824fa800386982818c8efb6c13ea89793c8a09d04b560111d8a1555"
Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.990078 5012 scope.go:117] "RemoveContainer" containerID="7d9d854cd627b97bd284e92095af02bb1f0cd8fa5c703cf9288525eb0d73cbb0"
Feb 19 05:50:34 crc kubenswrapper[5012]: I0219 05:50:34.719944 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" path="/var/lib/kubelet/pods/d355dfcc-ca49-41f8-93b7-630cf9a2a20f/volumes"
Feb 19 05:50:39 crc kubenswrapper[5012]: I0219 05:50:39.703526 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:50:39 crc kubenswrapper[5012]: E0219 05:50:39.704573 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:50:50 crc kubenswrapper[5012]: I0219 05:50:50.702744 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:50:50 crc kubenswrapper[5012]: E0219 05:50:50.703567 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:51:05 crc kubenswrapper[5012]: I0219 05:51:05.702838 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:51:05 crc kubenswrapper[5012]: E0219 05:51:05.703873 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:51:16 crc kubenswrapper[5012]: E0219 05:51:16.950834 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebf47868_aec9_4f2e_8c08_499161f45b18.slice/crio-29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebf47868_aec9_4f2e_8c08_499161f45b18.slice/crio-conmon-29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e.scope\": RecentStats: unable to find data in memory cache]"
Feb 19 05:51:17 crc kubenswrapper[5012]: I0219 05:51:17.396991 5012 generic.go:334] "Generic (PLEG): container finished" podID="ebf47868-aec9-4f2e-8c08-499161f45b18" containerID="29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e" exitCode=0
Feb 19 05:51:17 crc kubenswrapper[5012]: I0219 05:51:17.397140 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" event={"ID":"ebf47868-aec9-4f2e-8c08-499161f45b18","Type":"ContainerDied","Data":"29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e"}
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.033512 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.112260 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") pod \"ebf47868-aec9-4f2e-8c08-499161f45b18\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") "
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.112376 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") pod \"ebf47868-aec9-4f2e-8c08-499161f45b18\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") "
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.112530 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") pod \"ebf47868-aec9-4f2e-8c08-499161f45b18\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") "
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.112631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") pod \"ebf47868-aec9-4f2e-8c08-499161f45b18\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") "
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.119980 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw" (OuterVolumeSpecName: "kube-api-access-v85qw") pod "ebf47868-aec9-4f2e-8c08-499161f45b18" (UID: "ebf47868-aec9-4f2e-8c08-499161f45b18"). InnerVolumeSpecName "kube-api-access-v85qw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.120364 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ebf47868-aec9-4f2e-8c08-499161f45b18" (UID: "ebf47868-aec9-4f2e-8c08-499161f45b18"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.153319 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory" (OuterVolumeSpecName: "inventory") pod "ebf47868-aec9-4f2e-8c08-499161f45b18" (UID: "ebf47868-aec9-4f2e-8c08-499161f45b18"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.154688 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ebf47868-aec9-4f2e-8c08-499161f45b18" (UID: "ebf47868-aec9-4f2e-8c08-499161f45b18"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.216585 5012 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.216626 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.216639 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.216650 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") on node \"crc\" DevicePath \"\""
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.429629 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" event={"ID":"ebf47868-aec9-4f2e-8c08-499161f45b18","Type":"ContainerDied","Data":"47a7d8825e1acf3d49de6e07e3e26af34d28c13a97cb0ebcbb15be03c70da6f3"}
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.429691 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47a7d8825e1acf3d49de6e07e3e26af34d28c13a97cb0ebcbb15be03c70da6f3"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.429781 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.572513 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"]
Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.573178 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf47868-aec9-4f2e-8c08-499161f45b18" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573211 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf47868-aec9-4f2e-8c08-499161f45b18" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.573237 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="registry-server"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573249 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="registry-server"
Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.573328 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="extract-utilities"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573347 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="extract-utilities"
Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.573378 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="extract-content"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573391 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="extract-content"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573757 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="registry-server"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573786 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf47868-aec9-4f2e-8c08-499161f45b18" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.575015 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.577197 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.578168 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.581662 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.581865 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.585704 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"]
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.703721 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.704515 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.728727 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.729004 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.729071 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.832041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.832126 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.832175 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.837930 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.839952 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.914924 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:20 crc kubenswrapper[5012]: I0219 05:51:20.207764 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"
Feb 19 05:51:20 crc kubenswrapper[5012]: I0219 05:51:20.832882 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"]
Feb 19 05:51:21 crc kubenswrapper[5012]: I0219 05:51:21.459006 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" event={"ID":"02358307-dba6-44fa-9799-2440b1496c55","Type":"ContainerStarted","Data":"9b8b04aec33851631b41f126ebee440f562995d257d9a25d785090f3aa327c69"}
Feb 19 05:51:22 crc kubenswrapper[5012]: I0219 05:51:22.480453 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" event={"ID":"02358307-dba6-44fa-9799-2440b1496c55","Type":"ContainerStarted","Data":"eb9cec4bc6acba41d4aa46b7041e6a9f81869527c2b21dfb77744606447bcbcd"}
Feb 19 05:51:22 crc kubenswrapper[5012]: I0219 05:51:22.513148 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" podStartSLOduration=3.011121993 podStartE2EDuration="3.513114817s" podCreationTimestamp="2026-02-19 05:51:19 +0000 UTC" firstStartedPulling="2026-02-19 05:51:20.834890416 +0000 UTC m=+1576.868213025" lastFinishedPulling="2026-02-19 05:51:21.33688327 +0000 UTC m=+1577.370205849" observedRunningTime="2026-02-19 05:51:22.50505684 +0000 UTC m=+1578.538379419" watchObservedRunningTime="2026-02-19 05:51:22.513114817 +0000 UTC m=+1578.546437426"
Feb 19 05:51:30 crc kubenswrapper[5012]: I0219 05:51:30.703377 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:51:30 crc kubenswrapper[5012]: E0219 05:51:30.704884 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:51:33 crc kubenswrapper[5012]: I0219 05:51:33.948916 5012 scope.go:117] "RemoveContainer" containerID="694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce"
Feb 19 05:51:33 crc kubenswrapper[5012]: I0219 05:51:33.984262 5012 scope.go:117] "RemoveContainer" containerID="34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe"
Feb 19 05:51:34 crc kubenswrapper[5012]: I0219 05:51:34.017103 5012 scope.go:117] "RemoveContainer" containerID="4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3"
Feb 19 05:51:34 crc kubenswrapper[5012]: I0219 05:51:34.053429 5012 scope.go:117] "RemoveContainer" containerID="cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec"
Feb 19 05:51:44 crc kubenswrapper[5012]: I0219 05:51:44.723196 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:51:44 crc kubenswrapper[5012]: E0219 05:51:44.724283 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:51:59 crc kubenswrapper[5012]: I0219 05:51:59.704711 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:51:59 crc kubenswrapper[5012]: E0219 05:51:59.706049 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.082371 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-r8ddf"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.096670 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.107361 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jktc7"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.124743 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hthfx"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.133444 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.140889 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-r8ddf"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.148598 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.156349 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.164195 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jktc7"]
Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.171802 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hthfx"]
Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.720919 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" path="/var/lib/kubelet/pods/12f3008a-413a-4fe7-b3c1-773c10b6b2bf/volumes"
Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.722637 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" path="/var/lib/kubelet/pods/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a/volumes"
Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.725214 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a75d3b-186a-41d6-92a8-94729c520aa5" path="/var/lib/kubelet/pods/90a75d3b-186a-41d6-92a8-94729c520aa5/volumes"
Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.726975 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e7d95a-d78a-4d54-a66b-565114b4823e" path="/var/lib/kubelet/pods/d1e7d95a-d78a-4d54-a66b-565114b4823e/volumes"
Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.729455 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" path="/var/lib/kubelet/pods/e1e3020d-901d-4649-9e94-c5c0a4cc523d/volumes"
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.033266 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-vjzm9"]
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.050991 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-vjzm9"]
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.065362 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"]
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.076896 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b5f0-account-create-update-l7b8m"]
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.085349 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"]
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.092479 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b5f0-account-create-update-l7b8m"]
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.726546 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533d4699-332c-4ceb-ad6e-77c680699214" path="/var/lib/kubelet/pods/533d4699-332c-4ceb-ad6e-77c680699214/volumes"
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.727288 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e45e098-f689-4015-9871-5f66e5d7bef1" path="/var/lib/kubelet/pods/6e45e098-f689-4015-9871-5f66e5d7bef1/volumes"
Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.728340 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a973520b-997d-4c23-a056-590c96123e43" path="/var/lib/kubelet/pods/a973520b-997d-4c23-a056-590c96123e43/volumes"
Feb 19 05:52:10 crc kubenswrapper[5012]: I0219 05:52:10.703941 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:52:10 crc kubenswrapper[5012]: E0219 05:52:10.705166 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:52:17 crc kubenswrapper[5012]: I0219 05:52:17.044221 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lj2kq"]
Feb 19 05:52:17 crc kubenswrapper[5012]: I0219 05:52:17.062370 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lj2kq"]
Feb 19 05:52:18 crc kubenswrapper[5012]: I0219 05:52:18.713910 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" path="/var/lib/kubelet/pods/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79/volumes"
Feb 19 05:52:25 crc kubenswrapper[5012]: I0219 05:52:25.702929 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:52:25 crc kubenswrapper[5012]: E0219 05:52:25.703999 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.156843 5012 scope.go:117] "RemoveContainer" containerID="0ba4832ef5cde65c22a33ecfff620cd13c71e947e2063a45381a8045e3407918"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.199528 5012 scope.go:117] "RemoveContainer" containerID="8d4101d8165775d3c785f3ad562d7ef71806f55866410c4f9e87581c5430851f"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.279638 5012 scope.go:117] "RemoveContainer" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.322212 5012 scope.go:117] "RemoveContainer" containerID="c98bff27bc9812d723f9217b691c091425289e0f299460c4c4e1c7163b359d43"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.343718 5012 scope.go:117] "RemoveContainer" containerID="65e190912c6d7142d01553a587f58e32095a3f893daa4d06beb98e431777939c"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.381956 5012 scope.go:117] "RemoveContainer" containerID="573a87d5e8e95277642af154eba731e6d506fbe9be8db1436f41349ffe7bcbd4"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.435743 5012 scope.go:117] "RemoveContainer" containerID="93e7f5c5600e832347781d221af700104ca8f39c9c057fb3a233ce4702cf409c"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.470882 5012 scope.go:117] "RemoveContainer" containerID="6cb45a4049590e4fb7d60e94e092be98bdb1a162fc286f8a8013620e8c330260"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.494886 5012 scope.go:117] "RemoveContainer" containerID="0da1732600a370cfbfe77664995408f2ab300c5ef7fcf22ab0fd4f379bf54473"
Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.514352 5012 scope.go:117] "RemoveContainer" containerID="e1dc1ea6e87e48e7096bcfb12892dc9ac8929ba2984948549033f17095a5c4d5"
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.071931 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"]
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.090583 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4vdtn"]
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.101698 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"]
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.112635 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9pk56"]
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.120170 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"]
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.127567 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4vdtn"]
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.134512 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"]
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.141654 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9pk56"]
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.724209 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" path="/var/lib/kubelet/pods/0f81d2f2-d61b-49e6-bd6a-f466da52df74/volumes"
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.726090 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d452976-060b-4c25-9dd0-ffed69bb4d84" path="/var/lib/kubelet/pods/5d452976-060b-4c25-9dd0-ffed69bb4d84/volumes"
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.727484 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4bd4c60-a255-42cf-8dd0-913737e4b189" path="/var/lib/kubelet/pods/a4bd4c60-a255-42cf-8dd0-913737e4b189/volumes"
Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.729057 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff889c32-0dda-4734-a907-54f4a53e649f" path="/var/lib/kubelet/pods/ff889c32-0dda-4734-a907-54f4a53e649f/volumes"
Feb 19 05:52:40 crc kubenswrapper[5012]: I0219 05:52:40.050777 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-24p82"]
Feb 19 05:52:40 crc kubenswrapper[5012]: I0219 05:52:40.063667 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-24p82"]
Feb 19 05:52:40 crc kubenswrapper[5012]: I0219 05:52:40.703259 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:52:40 crc kubenswrapper[5012]: E0219 05:52:40.704081 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:52:40 crc kubenswrapper[5012]: I0219 05:52:40.725436 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d56d90-ce06-4de3-9edb-2092780e9afe" path="/var/lib/kubelet/pods/31d56d90-ce06-4de3-9edb-2092780e9afe/volumes"
Feb 19 05:52:46 crc kubenswrapper[5012]: I0219 05:52:46.058513 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x7kz5"]
Feb 19 05:52:46 crc kubenswrapper[5012]: I0219 05:52:46.074359 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x7kz5"]
Feb 19 05:52:46 crc kubenswrapper[5012]: I0219 05:52:46.729531 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b820bd-7677-4b9c-a16f-987e22a71876" path="/var/lib/kubelet/pods/13b820bd-7677-4b9c-a16f-987e22a71876/volumes"
Feb 19 05:52:53 crc kubenswrapper[5012]: I0219 05:52:53.703445 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:52:53 crc kubenswrapper[5012]: E0219 05:52:53.705116 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:53:06 crc kubenswrapper[5012]: I0219 05:53:06.704156 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:53:06 crc kubenswrapper[5012]: E0219 05:53:06.707102 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.045872 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gfhmj"]
Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.059009 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"]
Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.067095 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gfhmj"]
Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.074620 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"]
Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.723445 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c63064a-a5f1-48da-b11c-eb76b04e3397" path="/var/lib/kubelet/pods/8c63064a-a5f1-48da-b11c-eb76b04e3397/volumes"
Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.724694 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd86f802-eef3-479a-870a-e34e7ce028ba"
path="/var/lib/kubelet/pods/cd86f802-eef3-479a-870a-e34e7ce028ba/volumes" Feb 19 05:53:19 crc kubenswrapper[5012]: I0219 05:53:19.702912 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:53:19 crc kubenswrapper[5012]: E0219 05:53:19.703861 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:53:29 crc kubenswrapper[5012]: I0219 05:53:29.067827 5012 generic.go:334] "Generic (PLEG): container finished" podID="02358307-dba6-44fa-9799-2440b1496c55" containerID="eb9cec4bc6acba41d4aa46b7041e6a9f81869527c2b21dfb77744606447bcbcd" exitCode=0 Feb 19 05:53:29 crc kubenswrapper[5012]: I0219 05:53:29.067973 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" event={"ID":"02358307-dba6-44fa-9799-2440b1496c55","Type":"ContainerDied","Data":"eb9cec4bc6acba41d4aa46b7041e6a9f81869527c2b21dfb77744606447bcbcd"} Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.530180 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.703670 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") pod \"02358307-dba6-44fa-9799-2440b1496c55\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.704599 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") pod \"02358307-dba6-44fa-9799-2440b1496c55\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.704920 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") pod \"02358307-dba6-44fa-9799-2440b1496c55\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.716701 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45" (OuterVolumeSpecName: "kube-api-access-nwn45") pod "02358307-dba6-44fa-9799-2440b1496c55" (UID: "02358307-dba6-44fa-9799-2440b1496c55"). InnerVolumeSpecName "kube-api-access-nwn45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.746016 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory" (OuterVolumeSpecName: "inventory") pod "02358307-dba6-44fa-9799-2440b1496c55" (UID: "02358307-dba6-44fa-9799-2440b1496c55"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.753590 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "02358307-dba6-44fa-9799-2440b1496c55" (UID: "02358307-dba6-44fa-9799-2440b1496c55"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.808800 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") on node \"crc\" DevicePath \"\"" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.808845 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.808860 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.093080 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" event={"ID":"02358307-dba6-44fa-9799-2440b1496c55","Type":"ContainerDied","Data":"9b8b04aec33851631b41f126ebee440f562995d257d9a25d785090f3aa327c69"} Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.093143 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8b04aec33851631b41f126ebee440f562995d257d9a25d785090f3aa327c69" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 
05:53:31.093223 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.241879 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"] Feb 19 05:53:31 crc kubenswrapper[5012]: E0219 05:53:31.242778 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02358307-dba6-44fa-9799-2440b1496c55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.242812 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="02358307-dba6-44fa-9799-2440b1496c55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.243228 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="02358307-dba6-44fa-9799-2440b1496c55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.244549 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.254703 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"] Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.259963 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.260389 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.260649 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.264656 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.421380 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.421658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 
05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.421811 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.524782 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.524996 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.525069 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.533633 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.538363 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.551074 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.571209 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:53:32 crc kubenswrapper[5012]: I0219 05:53:32.264331 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"] Feb 19 05:53:33 crc kubenswrapper[5012]: I0219 05:53:33.120474 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" event={"ID":"a37d4335-7c06-4fa3-af51-6cfe6fb9a020","Type":"ContainerStarted","Data":"85c3a632b87baefd0f4635bc2948e782ad63514e78084b1f3a6e81fb5d16f7ff"} Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.131898 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" event={"ID":"a37d4335-7c06-4fa3-af51-6cfe6fb9a020","Type":"ContainerStarted","Data":"298f5bb339ba7ac183681cae0f465a88f8842b79b87df2d108cd2a29aab059a2"} Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.712246 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:53:34 crc kubenswrapper[5012]: E0219 05:53:34.712750 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.777079 5012 scope.go:117] "RemoveContainer" containerID="cfe7e53a61fb5256f22c4a39c4ac5b0bf7cc2f1ccf28f2709694c6b3715b8d0c" Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.831485 5012 scope.go:117] "RemoveContainer" 
containerID="7e6d7c6e4279d09faf69cc8325c3a9419e59f879f7e638bdadc3e1a99dfe010e" Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.896726 5012 scope.go:117] "RemoveContainer" containerID="cea9e8e15e555d9e359bdb9e094582010c0f5cb2424bf6d21370cbb196b19806" Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.959178 5012 scope.go:117] "RemoveContainer" containerID="a20a059012a07fc06fff87153b7822f281e937cfbfdfbad5c4e4671c1d2bfb30" Feb 19 05:53:35 crc kubenswrapper[5012]: I0219 05:53:35.025082 5012 scope.go:117] "RemoveContainer" containerID="152353fb3f9bf0d9255bd600198a1803f9e2b42292b1e50815808d78b63cdb99" Feb 19 05:53:35 crc kubenswrapper[5012]: I0219 05:53:35.063322 5012 scope.go:117] "RemoveContainer" containerID="20962d8cd5b490b4c52f0881b3105ca6e34c9e56c96152f389a414e4e6b49d12" Feb 19 05:53:35 crc kubenswrapper[5012]: I0219 05:53:35.113014 5012 scope.go:117] "RemoveContainer" containerID="bc1e75b8122059977fabe9b750a293942be5f1e6a7daf5e75f1e50d40f43dd63" Feb 19 05:53:35 crc kubenswrapper[5012]: I0219 05:53:35.160380 5012 scope.go:117] "RemoveContainer" containerID="67bce0df8bf4cde6aebe2e02939680ed6fbf6f5f67dfee6a477ff8a83ddd570c" Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.039520 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" podStartSLOduration=4.328510392 podStartE2EDuration="5.039498695s" podCreationTimestamp="2026-02-19 05:53:31 +0000 UTC" firstStartedPulling="2026-02-19 05:53:32.26685462 +0000 UTC m=+1708.300177229" lastFinishedPulling="2026-02-19 05:53:32.977842923 +0000 UTC m=+1709.011165532" observedRunningTime="2026-02-19 05:53:34.150916863 +0000 UTC m=+1710.184239432" watchObservedRunningTime="2026-02-19 05:53:36.039498695 +0000 UTC m=+1712.072821254" Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.042502 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w9g6v"] Feb 19 05:53:36 crc 
kubenswrapper[5012]: I0219 05:53:36.051235 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zf89d"] Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.060838 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w9g6v"] Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.076619 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zf89d"] Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.721547 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555a6373-5cdf-490e-b6ea-b0fb55425d28" path="/var/lib/kubelet/pods/555a6373-5cdf-490e-b6ea-b0fb55425d28/volumes" Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.722521 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be803869-4625-418d-bd39-bdbb4e6e0bfd" path="/var/lib/kubelet/pods/be803869-4625-418d-bd39-bdbb4e6e0bfd/volumes" Feb 19 05:53:48 crc kubenswrapper[5012]: I0219 05:53:48.703965 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:53:48 crc kubenswrapper[5012]: E0219 05:53:48.705034 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:53:51 crc kubenswrapper[5012]: I0219 05:53:51.045257 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-cdj57"] Feb 19 05:53:51 crc kubenswrapper[5012]: I0219 05:53:51.074849 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-cdj57"] Feb 19 05:53:52 crc kubenswrapper[5012]: I0219 
05:53:52.043715 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jzclm"] Feb 19 05:53:52 crc kubenswrapper[5012]: I0219 05:53:52.056491 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jzclm"] Feb 19 05:53:52 crc kubenswrapper[5012]: I0219 05:53:52.715547 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" path="/var/lib/kubelet/pods/89f14c4e-147e-4a05-a8d9-63b93aaad4a4/volumes" Feb 19 05:53:52 crc kubenswrapper[5012]: I0219 05:53:52.716372 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34a979c-9102-471f-9678-048fd5198cb8" path="/var/lib/kubelet/pods/a34a979c-9102-471f-9678-048fd5198cb8/volumes" Feb 19 05:53:56 crc kubenswrapper[5012]: I0219 05:53:56.041499 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xj7dw"] Feb 19 05:53:56 crc kubenswrapper[5012]: I0219 05:53:56.053436 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xj7dw"] Feb 19 05:53:56 crc kubenswrapper[5012]: I0219 05:53:56.726694 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" path="/var/lib/kubelet/pods/b98c972c-b350-44a1-a7c5-028914fe7bfc/volumes" Feb 19 05:54:02 crc kubenswrapper[5012]: I0219 05:54:02.703635 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:54:02 crc kubenswrapper[5012]: E0219 05:54:02.704315 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:54:04 crc kubenswrapper[5012]: I0219 05:54:04.060973 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-px7xk"] Feb 19 05:54:04 crc kubenswrapper[5012]: I0219 05:54:04.071779 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-px7xk"] Feb 19 05:54:04 crc kubenswrapper[5012]: I0219 05:54:04.724911 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787f8a71-dee4-40d2-b33b-85bcfc58f921" path="/var/lib/kubelet/pods/787f8a71-dee4-40d2-b33b-85bcfc58f921/volumes" Feb 19 05:54:14 crc kubenswrapper[5012]: I0219 05:54:14.710517 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:54:14 crc kubenswrapper[5012]: E0219 05:54:14.711418 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:54:25 crc kubenswrapper[5012]: I0219 05:54:25.702710 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:54:25 crc kubenswrapper[5012]: E0219 05:54:25.703380 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:54:35 crc kubenswrapper[5012]: 
I0219 05:54:35.369596 5012 scope.go:117] "RemoveContainer" containerID="8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923" Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.429524 5012 scope.go:117] "RemoveContainer" containerID="8322bcc6cc3c5b2d8222ae8137e7a8ab0b73bac7b8fa9b87cd91c71100844e13" Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.486390 5012 scope.go:117] "RemoveContainer" containerID="8659190e8633f7b88664c6c7e44927faf89d76ab66a53b4530e433a52d8c9664" Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.538612 5012 scope.go:117] "RemoveContainer" containerID="d0e335ec457cf8c772f55111337cf2d1aae49da15e75b237650c2e4a19efd926" Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.592160 5012 scope.go:117] "RemoveContainer" containerID="c3b30cfc4d7788c5bf2800aec00271d7a398ee5903276843825107c74fa7f5b9" Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.648161 5012 scope.go:117] "RemoveContainer" containerID="9a5f9edac057b3de1965c26aac0927e9eaced35943e1b07d9b0176cc162f7fc5" Feb 19 05:54:38 crc kubenswrapper[5012]: I0219 05:54:38.704288 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:54:38 crc kubenswrapper[5012]: E0219 05:54:38.705032 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.065399 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"] Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.081139 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"] Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.092809 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"] Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.100247 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"] Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.958279 5012 generic.go:334] "Generic (PLEG): container finished" podID="a37d4335-7c06-4fa3-af51-6cfe6fb9a020" containerID="298f5bb339ba7ac183681cae0f465a88f8842b79b87df2d108cd2a29aab059a2" exitCode=0 Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.958510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" event={"ID":"a37d4335-7c06-4fa3-af51-6cfe6fb9a020","Type":"ContainerDied","Data":"298f5bb339ba7ac183681cae0f465a88f8842b79b87df2d108cd2a29aab059a2"} Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.055525 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vj27c"] Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.067102 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"] Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.079106 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vj27c"] Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.088581 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"] Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.095428 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-j7vgh"] Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.101998 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-db-create-j7vgh"] Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.108536 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"] Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.115161 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"] Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.724754 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b1a4d80-a736-41c3-9157-c0a696c10eff" path="/var/lib/kubelet/pods/0b1a4d80-a736-41c3-9157-c0a696c10eff/volumes" Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.725580 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc398d7-f426-420d-981c-6bda415a2ce0" path="/var/lib/kubelet/pods/2fc398d7-f426-420d-981c-6bda415a2ce0/volumes" Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.726299 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" path="/var/lib/kubelet/pods/768cc9af-66f9-4972-a2b4-a69b0fb15b3d/volumes" Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.727091 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" path="/var/lib/kubelet/pods/80e98ac0-3018-4566-95b3-2d2dfd3e234e/volumes" Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.728474 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" path="/var/lib/kubelet/pods/cd4d5a16-81ab-4336-99d5-570d83e4baaa/volumes" Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.729202 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" path="/var/lib/kubelet/pods/efae98df-8f23-4e6b-bad0-f2c7a58fb86d/volumes" Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.523265 5012 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.570291 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") pod \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.570496 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") pod \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.570600 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") pod \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.581346 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84" (OuterVolumeSpecName: "kube-api-access-d7r84") pod "a37d4335-7c06-4fa3-af51-6cfe6fb9a020" (UID: "a37d4335-7c06-4fa3-af51-6cfe6fb9a020"). InnerVolumeSpecName "kube-api-access-d7r84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.607567 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory" (OuterVolumeSpecName: "inventory") pod "a37d4335-7c06-4fa3-af51-6cfe6fb9a020" (UID: "a37d4335-7c06-4fa3-af51-6cfe6fb9a020"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.630291 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a37d4335-7c06-4fa3-af51-6cfe6fb9a020" (UID: "a37d4335-7c06-4fa3-af51-6cfe6fb9a020"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.674721 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.674775 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.674799 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") on node \"crc\" DevicePath \"\"" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.025529 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" event={"ID":"a37d4335-7c06-4fa3-af51-6cfe6fb9a020","Type":"ContainerDied","Data":"85c3a632b87baefd0f4635bc2948e782ad63514e78084b1f3a6e81fb5d16f7ff"} Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.025826 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c3a632b87baefd0f4635bc2948e782ad63514e78084b1f3a6e81fb5d16f7ff" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 
05:54:46.025618 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.160540 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"] Feb 19 05:54:46 crc kubenswrapper[5012]: E0219 05:54:46.161073 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37d4335-7c06-4fa3-af51-6cfe6fb9a020" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.161096 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37d4335-7c06-4fa3-af51-6cfe6fb9a020" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.161366 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37d4335-7c06-4fa3-af51-6cfe6fb9a020" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.162255 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.164514 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.164572 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.164590 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.165593 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.174037 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"] Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.298562 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.298755 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 
05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.298892 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.401185 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.401298 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.401360 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.406657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.407508 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.434988 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.508871 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.880760 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"] Feb 19 05:54:47 crc kubenswrapper[5012]: I0219 05:54:47.038512 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" event={"ID":"cdccd552-e703-4d8d-86b4-ff481671527f","Type":"ContainerStarted","Data":"d31d06bb65a0505a8f6ea016de34ab00f00f1edb87ee9e48c8ef5ffe39a1a5e9"} Feb 19 05:54:48 crc kubenswrapper[5012]: I0219 05:54:48.060145 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" event={"ID":"cdccd552-e703-4d8d-86b4-ff481671527f","Type":"ContainerStarted","Data":"c4a36204435a7c333a565cd24f8b73764b976d3c9f94d39e5fc9e35c932d1d2c"} Feb 19 05:54:48 crc kubenswrapper[5012]: I0219 05:54:48.100868 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" podStartSLOduration=1.61343835 podStartE2EDuration="2.100848977s" podCreationTimestamp="2026-02-19 05:54:46 +0000 UTC" firstStartedPulling="2026-02-19 05:54:46.892287808 +0000 UTC m=+1782.925610377" lastFinishedPulling="2026-02-19 05:54:47.379698395 +0000 UTC m=+1783.413021004" observedRunningTime="2026-02-19 05:54:48.09078477 +0000 UTC m=+1784.124107339" watchObservedRunningTime="2026-02-19 05:54:48.100848977 +0000 UTC m=+1784.134171546" Feb 19 05:54:49 crc kubenswrapper[5012]: I0219 05:54:49.703183 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:54:50 crc kubenswrapper[5012]: I0219 05:54:50.088803 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe"} Feb 19 05:54:52 crc kubenswrapper[5012]: E0219 05:54:52.910873 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdccd552_e703_4d8d_86b4_ff481671527f.slice/crio-c4a36204435a7c333a565cd24f8b73764b976d3c9f94d39e5fc9e35c932d1d2c.scope\": RecentStats: unable to find data in memory cache]" Feb 19 05:54:53 crc kubenswrapper[5012]: I0219 05:54:53.155921 5012 generic.go:334] "Generic (PLEG): container finished" podID="cdccd552-e703-4d8d-86b4-ff481671527f" containerID="c4a36204435a7c333a565cd24f8b73764b976d3c9f94d39e5fc9e35c932d1d2c" exitCode=0 Feb 19 05:54:53 crc kubenswrapper[5012]: I0219 05:54:53.156053 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" event={"ID":"cdccd552-e703-4d8d-86b4-ff481671527f","Type":"ContainerDied","Data":"c4a36204435a7c333a565cd24f8b73764b976d3c9f94d39e5fc9e35c932d1d2c"} Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.784753 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.846417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") pod \"cdccd552-e703-4d8d-86b4-ff481671527f\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.846647 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") pod \"cdccd552-e703-4d8d-86b4-ff481671527f\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.846711 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") pod \"cdccd552-e703-4d8d-86b4-ff481671527f\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.868194 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt" (OuterVolumeSpecName: "kube-api-access-b6fjt") pod "cdccd552-e703-4d8d-86b4-ff481671527f" (UID: "cdccd552-e703-4d8d-86b4-ff481671527f"). InnerVolumeSpecName "kube-api-access-b6fjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.883490 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cdccd552-e703-4d8d-86b4-ff481671527f" (UID: "cdccd552-e703-4d8d-86b4-ff481671527f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.904174 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory" (OuterVolumeSpecName: "inventory") pod "cdccd552-e703-4d8d-86b4-ff481671527f" (UID: "cdccd552-e703-4d8d-86b4-ff481671527f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.948618 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") on node \"crc\" DevicePath \"\"" Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.948680 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.948697 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.237780 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" 
event={"ID":"cdccd552-e703-4d8d-86b4-ff481671527f","Type":"ContainerDied","Data":"d31d06bb65a0505a8f6ea016de34ab00f00f1edb87ee9e48c8ef5ffe39a1a5e9"} Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.237832 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d31d06bb65a0505a8f6ea016de34ab00f00f1edb87ee9e48c8ef5ffe39a1a5e9" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.239433 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.291462 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"] Feb 19 05:54:55 crc kubenswrapper[5012]: E0219 05:54:55.292613 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdccd552-e703-4d8d-86b4-ff481671527f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.292639 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdccd552-e703-4d8d-86b4-ff481671527f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.292876 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdccd552-e703-4d8d-86b4-ff481671527f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.293846 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.297819 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.297991 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.298020 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.300370 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.326056 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"] Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.359745 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.360338 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 
05:54:55.360495 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.461538 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.461680 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.461732 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.475181 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.487028 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.487482 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.629246 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:56 crc kubenswrapper[5012]: I0219 05:54:56.257746 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"] Feb 19 05:54:57 crc kubenswrapper[5012]: I0219 05:54:57.267216 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" event={"ID":"0037b322-99bb-4ae2-aba4-85ddcd8243ae","Type":"ContainerStarted","Data":"fc664419fd06ca01d4b66c021c3502deae780162959a25fba1e04fbdb98da62a"} Feb 19 05:54:57 crc kubenswrapper[5012]: I0219 05:54:57.270064 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" event={"ID":"0037b322-99bb-4ae2-aba4-85ddcd8243ae","Type":"ContainerStarted","Data":"464052eadd097af96e9cb927005eb1d2b0c38df05bb8893ce205e2fbdb42a86d"} Feb 19 05:54:57 crc kubenswrapper[5012]: I0219 05:54:57.294348 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" podStartSLOduration=1.881208203 podStartE2EDuration="2.29431896s" podCreationTimestamp="2026-02-19 05:54:55 +0000 UTC" firstStartedPulling="2026-02-19 05:54:56.273779526 +0000 UTC m=+1792.307102135" lastFinishedPulling="2026-02-19 05:54:56.686890283 +0000 UTC m=+1792.720212892" observedRunningTime="2026-02-19 05:54:57.288171749 +0000 UTC m=+1793.321494328" watchObservedRunningTime="2026-02-19 05:54:57.29431896 +0000 UTC m=+1793.327641539" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.210494 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.213792 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.227796 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.310788 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.310982 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.311040 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.417157 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.417275 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.417356 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.417944 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.418175 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.450718 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.548586 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:16 crc kubenswrapper[5012]: I0219 05:55:16.111144 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:16 crc kubenswrapper[5012]: W0219 05:55:16.112881 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93c01ce3_3353_4008_b521_c13b78700f14.slice/crio-e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798 WatchSource:0}: Error finding container e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798: Status 404 returned error can't find the container with id e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798 Feb 19 05:55:16 crc kubenswrapper[5012]: I0219 05:55:16.480007 5012 generic.go:334] "Generic (PLEG): container finished" podID="93c01ce3-3353-4008-b521-c13b78700f14" containerID="88705a5b47e877865905bfec0d79a37661c7afd39bd29b3b62dcb301a3a591e6" exitCode=0 Feb 19 05:55:16 crc kubenswrapper[5012]: I0219 05:55:16.480085 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerDied","Data":"88705a5b47e877865905bfec0d79a37661c7afd39bd29b3b62dcb301a3a591e6"} Feb 19 05:55:16 crc kubenswrapper[5012]: I0219 05:55:16.480429 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerStarted","Data":"e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798"} Feb 19 05:55:17 crc kubenswrapper[5012]: I0219 05:55:17.498958 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" 
event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerStarted","Data":"f2c73daa7912b8b42ff12ed6bf21505d1239d1f38a626a18bd4a378076264990"}
Feb 19 05:55:18 crc kubenswrapper[5012]: I0219 05:55:18.512105 5012 generic.go:334] "Generic (PLEG): container finished" podID="93c01ce3-3353-4008-b521-c13b78700f14" containerID="f2c73daa7912b8b42ff12ed6bf21505d1239d1f38a626a18bd4a378076264990" exitCode=0
Feb 19 05:55:18 crc kubenswrapper[5012]: I0219 05:55:18.512172 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerDied","Data":"f2c73daa7912b8b42ff12ed6bf21505d1239d1f38a626a18bd4a378076264990"}
Feb 19 05:55:19 crc kubenswrapper[5012]: I0219 05:55:19.065835 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"]
Feb 19 05:55:19 crc kubenswrapper[5012]: I0219 05:55:19.074010 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"]
Feb 19 05:55:19 crc kubenswrapper[5012]: I0219 05:55:19.525543 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerStarted","Data":"7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985"}
Feb 19 05:55:19 crc kubenswrapper[5012]: I0219 05:55:19.550763 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6m2dh" podStartSLOduration=2.130497702 podStartE2EDuration="4.550743789s" podCreationTimestamp="2026-02-19 05:55:15 +0000 UTC" firstStartedPulling="2026-02-19 05:55:16.48255306 +0000 UTC m=+1812.515875629" lastFinishedPulling="2026-02-19 05:55:18.902799137 +0000 UTC m=+1814.936121716" observedRunningTime="2026-02-19 05:55:19.543791207 +0000 UTC m=+1815.577113786" watchObservedRunningTime="2026-02-19 05:55:19.550743789 +0000 UTC m=+1815.584066358"
Feb 19 05:55:20 crc kubenswrapper[5012]: I0219 05:55:20.728630 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f256783-305c-4782-81c0-5aed8867b7e3" path="/var/lib/kubelet/pods/3f256783-305c-4782-81c0-5aed8867b7e3/volumes"
Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.549740 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6m2dh"
Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.552493 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6m2dh"
Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.633889 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6m2dh"
Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.726358 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6m2dh"
Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.876952 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"]
Feb 19 05:55:27 crc kubenswrapper[5012]: I0219 05:55:27.624854 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6m2dh" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="registry-server" containerID="cri-o://7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985" gracePeriod=2
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.636359 5012 generic.go:334] "Generic (PLEG): container finished" podID="93c01ce3-3353-4008-b521-c13b78700f14" containerID="7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985" exitCode=0
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.636458 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerDied","Data":"7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985"}
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.636759 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerDied","Data":"e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798"}
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.636783 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798"
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.735806 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6m2dh"
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.866376 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") pod \"93c01ce3-3353-4008-b521-c13b78700f14\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") "
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.866809 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") pod \"93c01ce3-3353-4008-b521-c13b78700f14\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") "
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.867116 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") pod \"93c01ce3-3353-4008-b521-c13b78700f14\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") "
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.867789 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities" (OuterVolumeSpecName: "utilities") pod "93c01ce3-3353-4008-b521-c13b78700f14" (UID: "93c01ce3-3353-4008-b521-c13b78700f14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.875463 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf" (OuterVolumeSpecName: "kube-api-access-d5srf") pod "93c01ce3-3353-4008-b521-c13b78700f14" (UID: "93c01ce3-3353-4008-b521-c13b78700f14"). InnerVolumeSpecName "kube-api-access-d5srf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.921342 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93c01ce3-3353-4008-b521-c13b78700f14" (UID: "93c01ce3-3353-4008-b521-c13b78700f14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.970470 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.970517 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.970534 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:29 crc kubenswrapper[5012]: I0219 05:55:29.645088 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6m2dh"
Feb 19 05:55:29 crc kubenswrapper[5012]: I0219 05:55:29.691989 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"]
Feb 19 05:55:29 crc kubenswrapper[5012]: I0219 05:55:29.705796 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"]
Feb 19 05:55:30 crc kubenswrapper[5012]: I0219 05:55:30.727165 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c01ce3-3353-4008-b521-c13b78700f14" path="/var/lib/kubelet/pods/93c01ce3-3353-4008-b521-c13b78700f14/volumes"
Feb 19 05:55:35 crc kubenswrapper[5012]: I0219 05:55:35.882477 5012 scope.go:117] "RemoveContainer" containerID="2adf806f0d4859a0678f70c1d3e40183b96910ec8d7d4b4dd3e550a8e559d848"
Feb 19 05:55:35 crc kubenswrapper[5012]: I0219 05:55:35.925999 5012 scope.go:117] "RemoveContainer" containerID="32216c19b01878e03cf37157a913c36ac04ed37d7c518d5811bfc0096e2fc84b"
Feb 19 05:55:35 crc kubenswrapper[5012]: I0219 05:55:35.994429 5012 scope.go:117] "RemoveContainer" containerID="43cb426b1d824281e78b0291231050744f408cc09f73ab56e4ae893d291e9f7e"
Feb 19 05:55:36 crc kubenswrapper[5012]: I0219 05:55:36.053606 5012 scope.go:117] "RemoveContainer" containerID="48110d1bed52f125950a67152ee45f991adbabd56a8a45d17e8316bb03423870"
Feb 19 05:55:36 crc kubenswrapper[5012]: I0219 05:55:36.097838 5012 scope.go:117] "RemoveContainer" containerID="b0ed53407a3cb3810cc4f0ec6ea8d71443cb0203ae2152d5e770b7f505f82370"
Feb 19 05:55:36 crc kubenswrapper[5012]: I0219 05:55:36.165061 5012 scope.go:117] "RemoveContainer" containerID="db7dcb78edaee0fd0bebd3da354f9bac23c709f6cc5c4054736fd0aaea637cae"
Feb 19 05:55:36 crc kubenswrapper[5012]: I0219 05:55:36.194560 5012 scope.go:117] "RemoveContainer" containerID="7b6605dba53e000181057a053dcadb95742b096a45a5fa3c7a87f8e866bb1bd9"
Feb 19 05:55:37 crc kubenswrapper[5012]: I0219 05:55:37.744183 5012 generic.go:334] "Generic (PLEG): container finished" podID="0037b322-99bb-4ae2-aba4-85ddcd8243ae" containerID="fc664419fd06ca01d4b66c021c3502deae780162959a25fba1e04fbdb98da62a" exitCode=0
Feb 19 05:55:37 crc kubenswrapper[5012]: I0219 05:55:37.744280 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" event={"ID":"0037b322-99bb-4ae2-aba4-85ddcd8243ae","Type":"ContainerDied","Data":"fc664419fd06ca01d4b66c021c3502deae780162959a25fba1e04fbdb98da62a"}
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.301673 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.418194 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") pod \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") "
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.418570 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") pod \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") "
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.418616 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") pod \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") "
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.425199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz" (OuterVolumeSpecName: "kube-api-access-b5dxz") pod "0037b322-99bb-4ae2-aba4-85ddcd8243ae" (UID: "0037b322-99bb-4ae2-aba4-85ddcd8243ae"). InnerVolumeSpecName "kube-api-access-b5dxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.454143 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory" (OuterVolumeSpecName: "inventory") pod "0037b322-99bb-4ae2-aba4-85ddcd8243ae" (UID: "0037b322-99bb-4ae2-aba4-85ddcd8243ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.483574 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0037b322-99bb-4ae2-aba4-85ddcd8243ae" (UID: "0037b322-99bb-4ae2-aba4-85ddcd8243ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.521785 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.521828 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.521841 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.771776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" event={"ID":"0037b322-99bb-4ae2-aba4-85ddcd8243ae","Type":"ContainerDied","Data":"464052eadd097af96e9cb927005eb1d2b0c38df05bb8893ce205e2fbdb42a86d"}
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.771834 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464052eadd097af96e9cb927005eb1d2b0c38df05bb8893ce205e2fbdb42a86d"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.771867 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.916790 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"]
Feb 19 05:55:39 crc kubenswrapper[5012]: E0219 05:55:39.917648 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0037b322-99bb-4ae2-aba4-85ddcd8243ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.917686 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0037b322-99bb-4ae2-aba4-85ddcd8243ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:55:39 crc kubenswrapper[5012]: E0219 05:55:39.917715 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="registry-server"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.917731 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="registry-server"
Feb 19 05:55:39 crc kubenswrapper[5012]: E0219 05:55:39.917770 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="extract-utilities"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.917783 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="extract-utilities"
Feb 19 05:55:39 crc kubenswrapper[5012]: E0219 05:55:39.917827 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="extract-content"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.917841 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="extract-content"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.918183 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0037b322-99bb-4ae2-aba4-85ddcd8243ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.918251 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="registry-server"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.919451 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.921812 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.921820 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.923003 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.923488 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.932254 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.932540 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.932605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.935605 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"]
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.035007 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.035624 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.035827 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.038666 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.039352 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.054735 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.249159 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.854105 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.858520 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"]
Feb 19 05:55:41 crc kubenswrapper[5012]: I0219 05:55:41.800778 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" event={"ID":"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3","Type":"ContainerStarted","Data":"bc05575ff0cfed5776b9bbb64af2657cd2088231709e3bde4bcd64e3805d965c"}
Feb 19 05:55:41 crc kubenswrapper[5012]: I0219 05:55:41.801149 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" event={"ID":"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3","Type":"ContainerStarted","Data":"15f3c2beb4606524fdb23e93928af3f16cc228ec4fea4976e2a4bd58c03f5a59"}
Feb 19 05:55:41 crc kubenswrapper[5012]: I0219 05:55:41.833061 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" podStartSLOduration=2.384688291 podStartE2EDuration="2.833030735s" podCreationTimestamp="2026-02-19 05:55:39 +0000 UTC" firstStartedPulling="2026-02-19 05:55:40.853737991 +0000 UTC m=+1836.887060570" lastFinishedPulling="2026-02-19 05:55:41.302080405 +0000 UTC m=+1837.335403014" observedRunningTime="2026-02-19 05:55:41.829273322 +0000 UTC m=+1837.862595921" watchObservedRunningTime="2026-02-19 05:55:41.833030735 +0000 UTC m=+1837.866353344"
Feb 19 05:55:43 crc kubenswrapper[5012]: I0219 05:55:43.066719 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"]
Feb 19 05:55:43 crc kubenswrapper[5012]: I0219 05:55:43.077864 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"]
Feb 19 05:55:44 crc kubenswrapper[5012]: I0219 05:55:44.721723 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" path="/var/lib/kubelet/pods/70ce9757-cdf1-4864-95ad-9d25fb9830a9/volumes"
Feb 19 05:55:47 crc kubenswrapper[5012]: I0219 05:55:47.038137 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"]
Feb 19 05:55:47 crc kubenswrapper[5012]: I0219 05:55:47.057709 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"]
Feb 19 05:55:48 crc kubenswrapper[5012]: I0219 05:55:48.724060 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" path="/var/lib/kubelet/pods/7fc8fbb1-0e37-419f-86e0-6ce8db99225d/volumes"
Feb 19 05:56:26 crc kubenswrapper[5012]: I0219 05:56:26.046099 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"]
Feb 19 05:56:26 crc kubenswrapper[5012]: I0219 05:56:26.063896 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"]
Feb 19 05:56:26 crc kubenswrapper[5012]: I0219 05:56:26.728564 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f597fc0f-7407-4f05-916c-70f7a3f145ec" path="/var/lib/kubelet/pods/f597fc0f-7407-4f05-916c-70f7a3f145ec/volumes"
Feb 19 05:56:35 crc kubenswrapper[5012]: I0219 05:56:35.446149 5012 generic.go:334] "Generic (PLEG): container finished" podID="8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" containerID="bc05575ff0cfed5776b9bbb64af2657cd2088231709e3bde4bcd64e3805d965c" exitCode=0
Feb 19 05:56:35 crc kubenswrapper[5012]: I0219 05:56:35.446272 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" event={"ID":"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3","Type":"ContainerDied","Data":"bc05575ff0cfed5776b9bbb64af2657cd2088231709e3bde4bcd64e3805d965c"}
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.430783 5012 scope.go:117] "RemoveContainer" containerID="0b98797f9f7e97071d4699ed1c59c23ddac69aff6fb8708f48bdc42a56a8cf34"
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.516069 5012 scope.go:117] "RemoveContainer" containerID="021fb32c8f118be6cb115c199b5bccac76ab7b25e96dc7239f4fa322280c2c3c"
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.564183 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-59bfbf7475-v98h9" podUID="4c9aa274-240d-4d50-b38a-754dd493f351" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.593201 5012 scope.go:117] "RemoveContainer" containerID="9d9ddb4f57f745aaa08f8b6e7a9a59d578aaf776b154c4fb9be135f3d48b048d"
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.975844 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.140710 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") pod \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") "
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.140799 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") pod \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") "
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.140911 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") pod \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") "
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.148145 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf" (OuterVolumeSpecName: "kube-api-access-45xlf") pod "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" (UID: "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3"). InnerVolumeSpecName "kube-api-access-45xlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.168723 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" (UID: "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.177788 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory" (OuterVolumeSpecName: "inventory") pod "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" (UID: "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.244181 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.244233 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.244252 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.482106 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" event={"ID":"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3","Type":"ContainerDied","Data":"15f3c2beb4606524fdb23e93928af3f16cc228ec4fea4976e2a4bd58c03f5a59"}
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.482168 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f3c2beb4606524fdb23e93928af3f16cc228ec4fea4976e2a4bd58c03f5a59"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.482224 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.607939 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9rlns"]
Feb 19 05:56:37 crc kubenswrapper[5012]: E0219 05:56:37.608857 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.608880 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.609523 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.610567 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.612733 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.613300 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.613647 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.613843 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.637015 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9rlns"]
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.757643 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.757759 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.757905 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.860241 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.860354 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.860472 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.866137 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.874334 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.892679 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.940160 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:38 crc kubenswrapper[5012]: I0219 05:56:38.625658 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9rlns"]
Feb 19 05:56:38 crc kubenswrapper[5012]: W0219 05:56:38.631483 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7c29e8e_a085_4dcc_8dbf_7fa1f971a4dc.slice/crio-4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1 WatchSource:0}: Error finding container 4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1: Status 404 returned error can't find the container with id 4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1
Feb 19 05:56:39 crc kubenswrapper[5012]: I0219 05:56:39.506654 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" event={"ID":"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc","Type":"ContainerStarted","Data":"0457ea42f5ff1d0e68bcd711d97118c2a58a9eacf2c96d3ae743704c8fa2175b"}
Feb 19 05:56:39 crc kubenswrapper[5012]: I0219 05:56:39.506970 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" event={"ID":"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc","Type":"ContainerStarted","Data":"4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1"}
Feb 19 05:56:39 crc kubenswrapper[5012]: I0219 05:56:39.543498 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" podStartSLOduration=2.055351882 podStartE2EDuration="2.543480426s" podCreationTimestamp="2026-02-19 05:56:37 +0000 UTC" firstStartedPulling="2026-02-19 05:56:38.638754719 +0000 UTC m=+1894.672077328" lastFinishedPulling="2026-02-19 05:56:39.126883293 +0000 UTC m=+1895.160205872" observedRunningTime="2026-02-19 05:56:39.527295817 +0000 UTC m=+1895.560618406" watchObservedRunningTime="2026-02-19 05:56:39.543480426 +0000 UTC m=+1895.576802995"
Feb 19 05:56:47 crc kubenswrapper[5012]: I0219 05:56:47.626600 5012 generic.go:334] "Generic (PLEG): container finished" podID="f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" containerID="0457ea42f5ff1d0e68bcd711d97118c2a58a9eacf2c96d3ae743704c8fa2175b" exitCode=0
Feb 19 05:56:47 crc kubenswrapper[5012]: I0219 05:56:47.626730 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" event={"ID":"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc","Type":"ContainerDied","Data":"0457ea42f5ff1d0e68bcd711d97118c2a58a9eacf2c96d3ae743704c8fa2175b"}
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.106567 5012 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.216033 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") pod \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.216157 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") pod \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.217066 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") pod \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.224932 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb" (OuterVolumeSpecName: "kube-api-access-zl2vb") pod "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" (UID: "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc"). InnerVolumeSpecName "kube-api-access-zl2vb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.263729 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" (UID: "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.270916 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" (UID: "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.319091 5012 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.319137 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") on node \"crc\" DevicePath \"\"" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.319159 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.658292 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" 
event={"ID":"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc","Type":"ContainerDied","Data":"4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1"} Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.658384 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.658442 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.800426 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"] Feb 19 05:56:49 crc kubenswrapper[5012]: E0219 05:56:49.801072 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" containerName="ssh-known-hosts-edpm-deployment" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.801103 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" containerName="ssh-known-hosts-edpm-deployment" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.801571 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" containerName="ssh-known-hosts-edpm-deployment" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.802760 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.808442 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"] Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.809596 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.809923 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.810114 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.810290 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.935574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.936000 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.936080 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.038201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.038407 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.038672 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.046067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.052226 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.058063 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.143985 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.533748 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"] Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.670092 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" event={"ID":"86b984ed-bd52-4348-9415-dccff4a0e1a4","Type":"ContainerStarted","Data":"0beb898611f4ed15a0cd19f9789f1a08d1d7d38c2e8a2aea667948b1e8299e92"} Feb 19 05:56:51 crc kubenswrapper[5012]: I0219 05:56:51.684900 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" event={"ID":"86b984ed-bd52-4348-9415-dccff4a0e1a4","Type":"ContainerStarted","Data":"004667b1fced0c38d1f3741b0ac4b7f40b4fff628ed0873ae170dd8144935811"} Feb 19 05:56:51 crc kubenswrapper[5012]: I0219 05:56:51.742381 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" podStartSLOduration=2.365669061 podStartE2EDuration="2.74235108s" podCreationTimestamp="2026-02-19 05:56:49 +0000 UTC" firstStartedPulling="2026-02-19 05:56:50.538435073 +0000 UTC m=+1906.571757672" lastFinishedPulling="2026-02-19 05:56:50.915117122 +0000 UTC m=+1906.948439691" observedRunningTime="2026-02-19 05:56:51.732270562 +0000 UTC m=+1907.765593131" watchObservedRunningTime="2026-02-19 05:56:51.74235108 +0000 UTC m=+1907.775673669" Feb 19 05:56:59 crc kubenswrapper[5012]: I0219 05:56:59.784130 5012 generic.go:334] "Generic (PLEG): container finished" podID="86b984ed-bd52-4348-9415-dccff4a0e1a4" containerID="004667b1fced0c38d1f3741b0ac4b7f40b4fff628ed0873ae170dd8144935811" exitCode=0 Feb 19 05:56:59 crc kubenswrapper[5012]: I0219 05:56:59.784266 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" event={"ID":"86b984ed-bd52-4348-9415-dccff4a0e1a4","Type":"ContainerDied","Data":"004667b1fced0c38d1f3741b0ac4b7f40b4fff628ed0873ae170dd8144935811"} Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.249874 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.302974 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") pod \"86b984ed-bd52-4348-9415-dccff4a0e1a4\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.303180 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") pod \"86b984ed-bd52-4348-9415-dccff4a0e1a4\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.303225 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") pod \"86b984ed-bd52-4348-9415-dccff4a0e1a4\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.310979 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9" (OuterVolumeSpecName: "kube-api-access-gxxv9") pod "86b984ed-bd52-4348-9415-dccff4a0e1a4" (UID: "86b984ed-bd52-4348-9415-dccff4a0e1a4"). InnerVolumeSpecName "kube-api-access-gxxv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.332883 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory" (OuterVolumeSpecName: "inventory") pod "86b984ed-bd52-4348-9415-dccff4a0e1a4" (UID: "86b984ed-bd52-4348-9415-dccff4a0e1a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.333520 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86b984ed-bd52-4348-9415-dccff4a0e1a4" (UID: "86b984ed-bd52-4348-9415-dccff4a0e1a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.406426 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.406465 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.406479 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.809368 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" 
event={"ID":"86b984ed-bd52-4348-9415-dccff4a0e1a4","Type":"ContainerDied","Data":"0beb898611f4ed15a0cd19f9789f1a08d1d7d38c2e8a2aea667948b1e8299e92"} Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.809430 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0beb898611f4ed15a0cd19f9789f1a08d1d7d38c2e8a2aea667948b1e8299e92" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.809431 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.908463 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs"] Feb 19 05:57:01 crc kubenswrapper[5012]: E0219 05:57:01.909140 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b984ed-bd52-4348-9415-dccff4a0e1a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.909160 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b984ed-bd52-4348-9415-dccff4a0e1a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.909518 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b984ed-bd52-4348-9415-dccff4a0e1a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.910255 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.912452 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.912787 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.912852 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.913162 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.929908 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs"] Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.019970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.020057 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 
05:57:02.020572 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.123269 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.123420 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.123742 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.132066 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.141360 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.145729 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.237590 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.917784 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs"] Feb 19 05:57:03 crc kubenswrapper[5012]: I0219 05:57:03.843885 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" event={"ID":"464de984-0dd6-4c4d-aed3-afbf84e0cdcf","Type":"ContainerStarted","Data":"9a4a0914747cd2de20603a5e64c71f5edb88e8b126aebbddab0dc4bad3dbf103"} Feb 19 05:57:03 crc kubenswrapper[5012]: I0219 05:57:03.844662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" event={"ID":"464de984-0dd6-4c4d-aed3-afbf84e0cdcf","Type":"ContainerStarted","Data":"3a6afe214b6ff21a320431d18029a8b3ca770fd5e32e700ac909bbecfbfc0c9b"} Feb 19 05:57:03 crc kubenswrapper[5012]: I0219 05:57:03.881817 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" podStartSLOduration=2.421210464 podStartE2EDuration="2.88179754s" podCreationTimestamp="2026-02-19 05:57:01 +0000 UTC" firstStartedPulling="2026-02-19 05:57:02.922858257 +0000 UTC m=+1918.956180866" lastFinishedPulling="2026-02-19 05:57:03.383445333 +0000 UTC m=+1919.416767942" observedRunningTime="2026-02-19 05:57:03.873927136 +0000 UTC m=+1919.907249745" watchObservedRunningTime="2026-02-19 05:57:03.88179754 +0000 UTC m=+1919.915120139" Feb 19 05:57:13 crc kubenswrapper[5012]: I0219 05:57:13.982891 5012 generic.go:334] "Generic (PLEG): container finished" podID="464de984-0dd6-4c4d-aed3-afbf84e0cdcf" containerID="9a4a0914747cd2de20603a5e64c71f5edb88e8b126aebbddab0dc4bad3dbf103" exitCode=0 Feb 19 05:57:13 crc kubenswrapper[5012]: I0219 05:57:13.983028 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" event={"ID":"464de984-0dd6-4c4d-aed3-afbf84e0cdcf","Type":"ContainerDied","Data":"9a4a0914747cd2de20603a5e64c71f5edb88e8b126aebbddab0dc4bad3dbf103"} Feb 19 05:57:14 crc kubenswrapper[5012]: I0219 05:57:14.430616 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:57:14 crc kubenswrapper[5012]: I0219 05:57:14.430666 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.519249 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.558851 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") pod \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.559058 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") pod \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.559148 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") pod \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.571667 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv" (OuterVolumeSpecName: "kube-api-access-7pqjv") pod "464de984-0dd6-4c4d-aed3-afbf84e0cdcf" (UID: "464de984-0dd6-4c4d-aed3-afbf84e0cdcf"). InnerVolumeSpecName "kube-api-access-7pqjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.615283 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory" (OuterVolumeSpecName: "inventory") pod "464de984-0dd6-4c4d-aed3-afbf84e0cdcf" (UID: "464de984-0dd6-4c4d-aed3-afbf84e0cdcf"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.616286 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "464de984-0dd6-4c4d-aed3-afbf84e0cdcf" (UID: "464de984-0dd6-4c4d-aed3-afbf84e0cdcf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.660719 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.660758 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.660770 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.024827 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" event={"ID":"464de984-0dd6-4c4d-aed3-afbf84e0cdcf","Type":"ContainerDied","Data":"3a6afe214b6ff21a320431d18029a8b3ca770fd5e32e700ac909bbecfbfc0c9b"} Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.025192 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a6afe214b6ff21a320431d18029a8b3ca770fd5e32e700ac909bbecfbfc0c9b" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 
05:57:16.024895 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.139548 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5"] Feb 19 05:57:16 crc kubenswrapper[5012]: E0219 05:57:16.139983 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464de984-0dd6-4c4d-aed3-afbf84e0cdcf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.140004 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="464de984-0dd6-4c4d-aed3-afbf84e0cdcf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.140222 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="464de984-0dd6-4c4d-aed3-afbf84e0cdcf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.141002 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.143367 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.144668 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.145672 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.146050 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.146483 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.146494 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.147117 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.147682 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.170254 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5"] Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174223 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174313 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174362 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174398 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174474 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9mg\" (UniqueName: 
\"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174537 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174564 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174589 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174609 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174632 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174677 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174714 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174751 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174783 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276488 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276531 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276567 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276588 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276613 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276668 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276702 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276737 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276768 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276794 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276836 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276875 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276911 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276980 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9mg\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.281105 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.281927 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.281945 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.282445 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.283065 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.283790 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.284791 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.285493 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.285667 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.285724 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.285786 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.287103 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.290290 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.296335 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9mg\" (UniqueName: 
\"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.459998 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.850834 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5"] Feb 19 05:57:17 crc kubenswrapper[5012]: I0219 05:57:17.036848 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" event={"ID":"d869003b-7b03-4a8b-9f9c-73ca0ec4f359","Type":"ContainerStarted","Data":"c79daf974f33d700f2f0838eecbe85e8cd1c1c0b3e0d9db46ea76aecfbdd9d4f"} Feb 19 05:57:18 crc kubenswrapper[5012]: I0219 05:57:18.051183 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" event={"ID":"d869003b-7b03-4a8b-9f9c-73ca0ec4f359","Type":"ContainerStarted","Data":"3083a62009353315b9fc731e7060bf2fdf582bb6651774b6421dccef77849ffe"} Feb 19 05:57:18 crc kubenswrapper[5012]: I0219 05:57:18.084008 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" podStartSLOduration=1.603627669 podStartE2EDuration="2.083987551s" podCreationTimestamp="2026-02-19 05:57:16 +0000 UTC" firstStartedPulling="2026-02-19 05:57:16.865277621 +0000 UTC m=+1932.898600190" lastFinishedPulling="2026-02-19 05:57:17.345637463 +0000 UTC m=+1933.378960072" observedRunningTime="2026-02-19 05:57:18.071423482 +0000 UTC m=+1934.104746051" watchObservedRunningTime="2026-02-19 05:57:18.083987551 +0000 UTC 
m=+1934.117310120" Feb 19 05:57:44 crc kubenswrapper[5012]: I0219 05:57:44.430642 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:57:44 crc kubenswrapper[5012]: I0219 05:57:44.431553 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:57:58 crc kubenswrapper[5012]: I0219 05:57:58.494137 5012 generic.go:334] "Generic (PLEG): container finished" podID="d869003b-7b03-4a8b-9f9c-73ca0ec4f359" containerID="3083a62009353315b9fc731e7060bf2fdf582bb6651774b6421dccef77849ffe" exitCode=0 Feb 19 05:57:58 crc kubenswrapper[5012]: I0219 05:57:58.494260 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" event={"ID":"d869003b-7b03-4a8b-9f9c-73ca0ec4f359","Type":"ContainerDied","Data":"3083a62009353315b9fc731e7060bf2fdf582bb6651774b6421dccef77849ffe"} Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.100358 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162638 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162728 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9mg\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162787 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162817 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162895 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" 
(UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162951 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162991 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163041 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163092 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163124 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163188 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163228 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163339 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163391 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.173039 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.173216 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.173235 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg" (OuterVolumeSpecName: "kube-api-access-zf9mg") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "kube-api-access-zf9mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.180847 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.181220 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.181478 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192581 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192814 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192658 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192726 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192924 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.194387 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.214040 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.219713 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory" (OuterVolumeSpecName: "inventory") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265637 5012 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265675 5012 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265686 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265698 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265707 5012 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: 
I0219 05:58:00.265718 5012 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265727 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265736 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265748 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265758 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265768 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9mg\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265779 5012 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265788 5012 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265799 5012 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.525515 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" event={"ID":"d869003b-7b03-4a8b-9f9c-73ca0ec4f359","Type":"ContainerDied","Data":"c79daf974f33d700f2f0838eecbe85e8cd1c1c0b3e0d9db46ea76aecfbdd9d4f"} Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.525564 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c79daf974f33d700f2f0838eecbe85e8cd1c1c0b3e0d9db46ea76aecfbdd9d4f" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.525601 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.752771 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx"] Feb 19 05:58:00 crc kubenswrapper[5012]: E0219 05:58:00.753532 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d869003b-7b03-4a8b-9f9c-73ca0ec4f359" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.753564 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d869003b-7b03-4a8b-9f9c-73ca0ec4f359" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.753910 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d869003b-7b03-4a8b-9f9c-73ca0ec4f359" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.755170 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.757890 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.763340 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.763665 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.763877 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.764034 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.766622 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx"] Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882169 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882282 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: 
\"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882391 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882492 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882610 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.984206 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.984389 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.984558 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.984714 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.985434 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.985480 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.991424 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.992868 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.999858 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:01 crc kubenswrapper[5012]: I0219 05:58:01.005512 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:01 crc kubenswrapper[5012]: I0219 05:58:01.087966 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:01 crc kubenswrapper[5012]: I0219 05:58:01.712228 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx"] Feb 19 05:58:01 crc kubenswrapper[5012]: W0219 05:58:01.716945 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7335769e_5b13_4d1b_8aa7_e7f192ee9e2b.slice/crio-51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731 WatchSource:0}: Error finding container 51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731: Status 404 returned error can't find the container with id 51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731 Feb 19 05:58:02 crc kubenswrapper[5012]: I0219 05:58:02.550049 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" event={"ID":"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b","Type":"ContainerStarted","Data":"51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731"} Feb 19 05:58:03 crc kubenswrapper[5012]: I0219 05:58:03.564295 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" event={"ID":"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b","Type":"ContainerStarted","Data":"f923d7786be4a9ab567db6de15be49bd354ff86095fbe61b564432c2dfb881d3"} Feb 19 05:58:03 crc kubenswrapper[5012]: I0219 05:58:03.587365 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" podStartSLOduration=3.080080611 podStartE2EDuration="3.587338961s" podCreationTimestamp="2026-02-19 05:58:00 +0000 UTC" firstStartedPulling="2026-02-19 05:58:01.719644135 +0000 UTC m=+1977.752966704" lastFinishedPulling="2026-02-19 05:58:02.226902455 +0000 UTC m=+1978.260225054" observedRunningTime="2026-02-19 
05:58:03.582468211 +0000 UTC m=+1979.615790810" watchObservedRunningTime="2026-02-19 05:58:03.587338961 +0000 UTC m=+1979.620661570" Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.430844 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.431368 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.431417 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.432222 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.432292 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe" gracePeriod=600 Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.676092 5012 generic.go:334] 
"Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe" exitCode=0 Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.676288 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe"} Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.676385 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:58:15 crc kubenswrapper[5012]: I0219 05:58:15.691941 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"} Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.184086 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.187547 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.210681 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.251844 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.252481 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.252546 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.356154 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.356251 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.356330 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.356902 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.360903 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.379921 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.510256 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:45 crc kubenswrapper[5012]: I0219 05:58:45.014871 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:58:45 crc kubenswrapper[5012]: I0219 05:58:45.103286 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerStarted","Data":"6abd3d316cec6f5cbf6e459a814eb5160689ca42d876e80b63aaf9e3233e6715"} Feb 19 05:58:46 crc kubenswrapper[5012]: I0219 05:58:46.111958 5012 generic.go:334] "Generic (PLEG): container finished" podID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerID="6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b" exitCode=0 Feb 19 05:58:46 crc kubenswrapper[5012]: I0219 05:58:46.112024 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerDied","Data":"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b"} Feb 19 05:58:48 crc kubenswrapper[5012]: I0219 05:58:48.761979 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerStarted","Data":"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3"} Feb 19 05:58:51 crc kubenswrapper[5012]: I0219 05:58:51.804353 5012 generic.go:334] "Generic (PLEG): container finished" podID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerID="b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3" exitCode=0 Feb 19 05:58:51 crc kubenswrapper[5012]: I0219 05:58:51.804476 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" 
event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerDied","Data":"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3"} Feb 19 05:58:52 crc kubenswrapper[5012]: I0219 05:58:52.823240 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerStarted","Data":"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044"} Feb 19 05:58:52 crc kubenswrapper[5012]: I0219 05:58:52.852236 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bmlpm" podStartSLOduration=2.705265775 podStartE2EDuration="8.852210549s" podCreationTimestamp="2026-02-19 05:58:44 +0000 UTC" firstStartedPulling="2026-02-19 05:58:46.114903329 +0000 UTC m=+2022.148225898" lastFinishedPulling="2026-02-19 05:58:52.261848093 +0000 UTC m=+2028.295170672" observedRunningTime="2026-02-19 05:58:52.851479161 +0000 UTC m=+2028.884801760" watchObservedRunningTime="2026-02-19 05:58:52.852210549 +0000 UTC m=+2028.885533148" Feb 19 05:58:54 crc kubenswrapper[5012]: I0219 05:58:54.511531 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:54 crc kubenswrapper[5012]: I0219 05:58:54.511932 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:55 crc kubenswrapper[5012]: I0219 05:58:55.575351 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bmlpm" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" probeResult="failure" output=< Feb 19 05:58:55 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:58:55 crc kubenswrapper[5012]: > Feb 19 05:59:04 crc kubenswrapper[5012]: I0219 05:59:04.559473 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:59:04 crc kubenswrapper[5012]: I0219 05:59:04.603669 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:59:04 crc kubenswrapper[5012]: I0219 05:59:04.799503 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:59:05 crc kubenswrapper[5012]: I0219 05:59:05.972882 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bmlpm" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" containerID="cri-o://9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" gracePeriod=2 Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.610690 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.724101 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") pod \"5db1fe46-364c-49e0-96a5-5f2deba8029b\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.724202 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") pod \"5db1fe46-364c-49e0-96a5-5f2deba8029b\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.724247 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") pod 
\"5db1fe46-364c-49e0-96a5-5f2deba8029b\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.725985 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities" (OuterVolumeSpecName: "utilities") pod "5db1fe46-364c-49e0-96a5-5f2deba8029b" (UID: "5db1fe46-364c-49e0-96a5-5f2deba8029b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.733517 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j" (OuterVolumeSpecName: "kube-api-access-j9x8j") pod "5db1fe46-364c-49e0-96a5-5f2deba8029b" (UID: "5db1fe46-364c-49e0-96a5-5f2deba8029b"). InnerVolumeSpecName "kube-api-access-j9x8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.827437 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.827484 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.896706 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5db1fe46-364c-49e0-96a5-5f2deba8029b" (UID: "5db1fe46-364c-49e0-96a5-5f2deba8029b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.928877 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989661 5012 generic.go:334] "Generic (PLEG): container finished" podID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerID="9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" exitCode=0 Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989730 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerDied","Data":"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044"} Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989796 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989819 5012 scope.go:117] "RemoveContainer" containerID="9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989798 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerDied","Data":"6abd3d316cec6f5cbf6e459a814eb5160689ca42d876e80b63aaf9e3233e6715"} Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.043554 5012 scope.go:117] "RemoveContainer" containerID="b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.046245 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.067347 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.072007 5012 scope.go:117] "RemoveContainer" containerID="6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.139027 5012 scope.go:117] "RemoveContainer" containerID="9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" Feb 19 05:59:07 crc kubenswrapper[5012]: E0219 05:59:07.139611 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044\": container with ID starting with 9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044 not found: ID does not exist" containerID="9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.139662 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044"} err="failed to get container status \"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044\": rpc error: code = NotFound desc = could not find container \"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044\": container with ID starting with 9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044 not found: ID does not exist" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.139698 5012 scope.go:117] "RemoveContainer" containerID="b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3" Feb 19 05:59:07 crc kubenswrapper[5012]: E0219 05:59:07.140084 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3\": container with ID starting with b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3 not found: ID does not exist" containerID="b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.140130 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3"} err="failed to get container status \"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3\": rpc error: code = NotFound desc = could not find container \"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3\": container with ID starting with b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3 not found: ID does not exist" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.140156 5012 scope.go:117] "RemoveContainer" containerID="6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b" Feb 19 05:59:07 crc kubenswrapper[5012]: E0219 
05:59:07.140522 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b\": container with ID starting with 6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b not found: ID does not exist" containerID="6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.140610 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b"} err="failed to get container status \"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b\": rpc error: code = NotFound desc = could not find container \"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b\": container with ID starting with 6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b not found: ID does not exist" Feb 19 05:59:08 crc kubenswrapper[5012]: I0219 05:59:08.722454 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" path="/var/lib/kubelet/pods/5db1fe46-364c-49e0-96a5-5f2deba8029b/volumes" Feb 19 05:59:15 crc kubenswrapper[5012]: I0219 05:59:15.093097 5012 generic.go:334] "Generic (PLEG): container finished" podID="7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" containerID="f923d7786be4a9ab567db6de15be49bd354ff86095fbe61b564432c2dfb881d3" exitCode=0 Feb 19 05:59:15 crc kubenswrapper[5012]: I0219 05:59:15.093206 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" event={"ID":"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b","Type":"ContainerDied","Data":"f923d7786be4a9ab567db6de15be49bd354ff86095fbe61b564432c2dfb881d3"} Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.695847 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761269 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761491 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761586 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761633 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761677 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.771220 5012 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.771477 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75" (OuterVolumeSpecName: "kube-api-access-z5x75") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "kube-api-access-z5x75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.804794 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.810297 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory" (OuterVolumeSpecName: "inventory") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.816166 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.864771 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.864935 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.865026 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.865111 5012 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.865195 5012 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.114590 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" event={"ID":"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b","Type":"ContainerDied","Data":"51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731"} Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.114829 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.114663 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228408 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2"] Feb 19 05:59:17 crc kubenswrapper[5012]: E0219 05:59:17.228823 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228840 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" Feb 19 05:59:17 crc kubenswrapper[5012]: E0219 05:59:17.228857 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="extract-utilities" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228863 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="extract-utilities" Feb 19 05:59:17 crc kubenswrapper[5012]: E0219 05:59:17.228882 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228889 5012 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 05:59:17 crc kubenswrapper[5012]: E0219 05:59:17.228900 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="extract-content" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228908 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="extract-content" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.229117 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.229129 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.229839 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232110 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232328 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232586 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232768 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232949 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.234832 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.240523 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2"] Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272413 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272464 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272494 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272639 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272768 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272818 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.374828 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.374935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.374989 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.375062 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.375182 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.375228 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.381432 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.382438 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.383148 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.385386 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.390216 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.396105 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.568019 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:18 crc kubenswrapper[5012]: W0219 05:59:18.146442 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod534720dc_6ff8_4fdc_9337_6fe77ad1eaa8.slice/crio-d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8 WatchSource:0}: Error finding container d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8: Status 404 returned error can't find the container with id d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8 Feb 19 05:59:18 crc kubenswrapper[5012]: I0219 05:59:18.151541 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2"] Feb 19 05:59:19 crc kubenswrapper[5012]: I0219 05:59:19.138444 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" event={"ID":"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8","Type":"ContainerStarted","Data":"a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504"} Feb 19 05:59:19 crc kubenswrapper[5012]: I0219 05:59:19.138811 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" event={"ID":"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8","Type":"ContainerStarted","Data":"d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8"} Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.152412 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" podStartSLOduration=42.687807216 podStartE2EDuration="43.152393589s" 
podCreationTimestamp="2026-02-19 05:59:17 +0000 UTC" firstStartedPulling="2026-02-19 05:59:18.148645428 +0000 UTC m=+2054.181967997" lastFinishedPulling="2026-02-19 05:59:18.613231761 +0000 UTC m=+2054.646554370" observedRunningTime="2026-02-19 05:59:19.163792992 +0000 UTC m=+2055.197115611" watchObservedRunningTime="2026-02-19 06:00:00.152393589 +0000 UTC m=+2096.185716168" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.163032 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.164963 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.168461 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.168946 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.181781 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.276165 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.276271 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.276882 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.378969 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.379041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.379100 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.380720 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.389453 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.404251 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.490910 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:01 crc kubenswrapper[5012]: I0219 06:00:01.029609 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:00:01 crc kubenswrapper[5012]: W0219 06:00:01.034635 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5bdc022_3d70_4d4d_8f03_f2cf8b295a7e.slice/crio-05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af WatchSource:0}: Error finding container 05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af: Status 404 returned error can't find the container with id 05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af Feb 19 06:00:01 crc kubenswrapper[5012]: I0219 06:00:01.616739 5012 generic.go:334] "Generic (PLEG): container finished" podID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" containerID="aab8c26b7c272ad359e2397dc4c5c133e04f23474846a5322b643a2a4fdad8bd" exitCode=0 Feb 19 06:00:01 crc kubenswrapper[5012]: I0219 06:00:01.616852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" event={"ID":"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e","Type":"ContainerDied","Data":"aab8c26b7c272ad359e2397dc4c5c133e04f23474846a5322b643a2a4fdad8bd"} Feb 19 06:00:01 crc kubenswrapper[5012]: I0219 06:00:01.617493 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" event={"ID":"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e","Type":"ContainerStarted","Data":"05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af"} Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.077212 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.135710 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") pod \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.135867 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") pod \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.136080 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") pod \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.136792 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" (UID: "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.137291 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.141575 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v" (OuterVolumeSpecName: "kube-api-access-72p4v") pod "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" (UID: "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e"). InnerVolumeSpecName "kube-api-access-72p4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.144437 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" (UID: "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.240477 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.240542 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.644026 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" event={"ID":"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e","Type":"ContainerDied","Data":"05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af"} Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.644065 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.644108 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:04 crc kubenswrapper[5012]: I0219 06:00:04.180203 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 06:00:04 crc kubenswrapper[5012]: I0219 06:00:04.194420 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 06:00:04 crc kubenswrapper[5012]: I0219 06:00:04.724609 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46582f7f-c6b0-4ae3-9103-4a4754304438" path="/var/lib/kubelet/pods/46582f7f-c6b0-4ae3-9103-4a4754304438/volumes" Feb 19 06:00:11 crc kubenswrapper[5012]: E0219 06:00:11.684184 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod534720dc_6ff8_4fdc_9337_6fe77ad1eaa8.slice/crio-a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod534720dc_6ff8_4fdc_9337_6fe77ad1eaa8.slice/crio-conmon-a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504.scope\": RecentStats: unable to find data in memory cache]" Feb 19 06:00:11 crc kubenswrapper[5012]: I0219 06:00:11.742972 5012 generic.go:334] "Generic (PLEG): container finished" podID="534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" containerID="a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504" exitCode=0 Feb 19 06:00:11 crc kubenswrapper[5012]: I0219 06:00:11.743028 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" event={"ID":"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8","Type":"ContainerDied","Data":"a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504"} 
Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.357787 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393075 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393240 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393336 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393421 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393518 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.394775 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.429539 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.471538 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg" (OuterVolumeSpecName: "kube-api-access-xstqg") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "kube-api-access-xstqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.483481 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory" (OuterVolumeSpecName: "inventory") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.497638 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.497666 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.497676 5012 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.514517 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.522580 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.525529 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.599396 5012 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.599428 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.599440 5012 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.782686 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" event={"ID":"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8","Type":"ContainerDied","Data":"d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8"} Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.782726 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8" 
Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.782777 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.882776 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s"] Feb 19 06:00:13 crc kubenswrapper[5012]: E0219 06:00:13.883146 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.883168 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 06:00:13 crc kubenswrapper[5012]: E0219 06:00:13.883184 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" containerName="collect-profiles" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.883193 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" containerName="collect-profiles" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.883502 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" containerName="collect-profiles" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.883522 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.884250 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.886455 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.886801 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.886985 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.891155 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.891229 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.918837 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s"] Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.012603 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.012910 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: 
\"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.012989 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.013234 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.013325 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115625 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115668 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115748 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115768 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115807 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.119917 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.120189 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.120209 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.124527 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.145620 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.211358 5012 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.430792 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.431098 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.800055 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s"] Feb 19 06:00:15 crc kubenswrapper[5012]: I0219 06:00:15.810463 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" event={"ID":"fcace677-35b0-499f-998c-99168fbfa0af","Type":"ContainerStarted","Data":"ead8d5cbfbadc07cdc6949287d7eaad0d3adb71e861dbd504d482651d9e45f96"} Feb 19 06:00:15 crc kubenswrapper[5012]: I0219 06:00:15.810887 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" event={"ID":"fcace677-35b0-499f-998c-99168fbfa0af","Type":"ContainerStarted","Data":"845fa55489eb1ebebf023adf297c3cff09eae6d31e26dd2248e57ae7baeee857"} Feb 19 06:00:15 crc kubenswrapper[5012]: I0219 06:00:15.840042 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" podStartSLOduration=2.410188323 podStartE2EDuration="2.840028076s" 
podCreationTimestamp="2026-02-19 06:00:13 +0000 UTC" firstStartedPulling="2026-02-19 06:00:14.805726272 +0000 UTC m=+2110.839048881" lastFinishedPulling="2026-02-19 06:00:15.235566055 +0000 UTC m=+2111.268888634" observedRunningTime="2026-02-19 06:00:15.836590352 +0000 UTC m=+2111.869912921" watchObservedRunningTime="2026-02-19 06:00:15.840028076 +0000 UTC m=+2111.873350635" Feb 19 06:00:36 crc kubenswrapper[5012]: I0219 06:00:36.820985 5012 scope.go:117] "RemoveContainer" containerID="6ecd18e5cbbb471f815af478d67f7066d4c1bb34788cd0f8db72ff1fe8b502b7" Feb 19 06:00:44 crc kubenswrapper[5012]: I0219 06:00:44.430939 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:00:44 crc kubenswrapper[5012]: I0219 06:00:44.431518 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.164612 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524681-x9bcr"] Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.167681 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.176967 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524681-x9bcr"] Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.291395 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.291530 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.291574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.291802 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.393733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.393859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.393986 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.394065 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.402460 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.406992 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.416546 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.417004 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.501750 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:01 crc kubenswrapper[5012]: I0219 06:01:01.024227 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524681-x9bcr"] Feb 19 06:01:01 crc kubenswrapper[5012]: I0219 06:01:01.322249 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524681-x9bcr" event={"ID":"86c7e36d-88e3-432a-ad6f-74de626c5f30","Type":"ContainerStarted","Data":"f73e2bce52b5a24470ca1d0bb1435f7e6c5323b6116485a81bc0973e84e9a11b"} Feb 19 06:01:01 crc kubenswrapper[5012]: I0219 06:01:01.322703 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524681-x9bcr" event={"ID":"86c7e36d-88e3-432a-ad6f-74de626c5f30","Type":"ContainerStarted","Data":"dd43f2b3a6e18b48ba0032f1d4b181d9e199dbb997ac344570a32216b4d8f020"} Feb 19 06:01:01 crc kubenswrapper[5012]: I0219 06:01:01.355368 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524681-x9bcr" podStartSLOduration=1.3553444049999999 podStartE2EDuration="1.355344405s" podCreationTimestamp="2026-02-19 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 06:01:01.345887123 +0000 UTC m=+2157.379209702" watchObservedRunningTime="2026-02-19 06:01:01.355344405 +0000 UTC m=+2157.388667014" Feb 19 06:01:04 crc kubenswrapper[5012]: I0219 06:01:04.356418 5012 generic.go:334] "Generic (PLEG): container finished" podID="86c7e36d-88e3-432a-ad6f-74de626c5f30" containerID="f73e2bce52b5a24470ca1d0bb1435f7e6c5323b6116485a81bc0973e84e9a11b" exitCode=0 Feb 19 06:01:04 crc kubenswrapper[5012]: I0219 06:01:04.356505 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524681-x9bcr" 
event={"ID":"86c7e36d-88e3-432a-ad6f-74de626c5f30","Type":"ContainerDied","Data":"f73e2bce52b5a24470ca1d0bb1435f7e6c5323b6116485a81bc0973e84e9a11b"} Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.771012 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.859868 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") pod \"86c7e36d-88e3-432a-ad6f-74de626c5f30\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.860040 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") pod \"86c7e36d-88e3-432a-ad6f-74de626c5f30\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.860181 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") pod \"86c7e36d-88e3-432a-ad6f-74de626c5f30\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.860227 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") pod \"86c7e36d-88e3-432a-ad6f-74de626c5f30\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.866893 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5" 
(OuterVolumeSpecName: "kube-api-access-9bvq5") pod "86c7e36d-88e3-432a-ad6f-74de626c5f30" (UID: "86c7e36d-88e3-432a-ad6f-74de626c5f30"). InnerVolumeSpecName "kube-api-access-9bvq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.867701 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "86c7e36d-88e3-432a-ad6f-74de626c5f30" (UID: "86c7e36d-88e3-432a-ad6f-74de626c5f30"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.903194 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86c7e36d-88e3-432a-ad6f-74de626c5f30" (UID: "86c7e36d-88e3-432a-ad6f-74de626c5f30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.931344 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data" (OuterVolumeSpecName: "config-data") pod "86c7e36d-88e3-432a-ad6f-74de626c5f30" (UID: "86c7e36d-88e3-432a-ad6f-74de626c5f30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.963711 5012 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.964084 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.964133 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.964150 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:06 crc kubenswrapper[5012]: I0219 06:01:06.388184 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524681-x9bcr" event={"ID":"86c7e36d-88e3-432a-ad6f-74de626c5f30","Type":"ContainerDied","Data":"dd43f2b3a6e18b48ba0032f1d4b181d9e199dbb997ac344570a32216b4d8f020"} Feb 19 06:01:06 crc kubenswrapper[5012]: I0219 06:01:06.388525 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd43f2b3a6e18b48ba0032f1d4b181d9e199dbb997ac344570a32216b4d8f020" Feb 19 06:01:06 crc kubenswrapper[5012]: I0219 06:01:06.388417 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.431105 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.431924 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.432009 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.433233 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.433398 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" gracePeriod=600 Feb 19 06:01:14 crc kubenswrapper[5012]: E0219 06:01:14.560752 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:15 crc kubenswrapper[5012]: I0219 06:01:15.509959 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" exitCode=0 Feb 19 06:01:15 crc kubenswrapper[5012]: I0219 06:01:15.510027 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"} Feb 19 06:01:15 crc kubenswrapper[5012]: I0219 06:01:15.510438 5012 scope.go:117] "RemoveContainer" containerID="50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe" Feb 19 06:01:15 crc kubenswrapper[5012]: I0219 06:01:15.511134 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:01:15 crc kubenswrapper[5012]: E0219 06:01:15.511733 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:26 crc kubenswrapper[5012]: I0219 06:01:26.704125 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:01:26 crc 
kubenswrapper[5012]: E0219 06:01:26.705264 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:36 crc kubenswrapper[5012]: I0219 06:01:36.906937 5012 scope.go:117] "RemoveContainer" containerID="88705a5b47e877865905bfec0d79a37661c7afd39bd29b3b62dcb301a3a591e6" Feb 19 06:01:36 crc kubenswrapper[5012]: I0219 06:01:36.939193 5012 scope.go:117] "RemoveContainer" containerID="f2c73daa7912b8b42ff12ed6bf21505d1239d1f38a626a18bd4a378076264990" Feb 19 06:01:37 crc kubenswrapper[5012]: I0219 06:01:37.030251 5012 scope.go:117] "RemoveContainer" containerID="7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985" Feb 19 06:01:38 crc kubenswrapper[5012]: I0219 06:01:38.704002 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:01:38 crc kubenswrapper[5012]: E0219 06:01:38.704938 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.761724 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:39 crc kubenswrapper[5012]: E0219 06:01:39.762527 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86c7e36d-88e3-432a-ad6f-74de626c5f30" containerName="keystone-cron" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.762558 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c7e36d-88e3-432a-ad6f-74de626c5f30" containerName="keystone-cron" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.763124 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c7e36d-88e3-432a-ad6f-74de626c5f30" containerName="keystone-cron" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.766976 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.789698 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.957216 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.957265 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.957447 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") pod \"certified-operators-9nqr4\" (UID: 
\"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.058855 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.059385 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.059647 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.059733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.060241 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") 
" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.081024 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.102146 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.613885 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.865017 5012 generic.go:334] "Generic (PLEG): container finished" podID="d888e883-8262-4386-b91a-14e87cd7fed3" containerID="90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6" exitCode=0 Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.865085 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerDied","Data":"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6"} Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.865405 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerStarted","Data":"d4f572a48e3284c87552e4d72878660194d42b6a59b07ece38f080eec80bec4c"} Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.867404 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:01:41 crc kubenswrapper[5012]: I0219 06:01:41.883490 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerStarted","Data":"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7"} Feb 19 06:01:42 crc kubenswrapper[5012]: I0219 06:01:42.900199 5012 generic.go:334] "Generic (PLEG): container finished" podID="d888e883-8262-4386-b91a-14e87cd7fed3" containerID="9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7" exitCode=0 Feb 19 06:01:42 crc kubenswrapper[5012]: I0219 06:01:42.900274 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerDied","Data":"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7"} Feb 19 06:01:43 crc kubenswrapper[5012]: I0219 06:01:43.912370 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerStarted","Data":"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1"} Feb 19 06:01:43 crc kubenswrapper[5012]: I0219 06:01:43.956731 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nqr4" podStartSLOduration=2.529690358 podStartE2EDuration="4.956706069s" podCreationTimestamp="2026-02-19 06:01:39 +0000 UTC" firstStartedPulling="2026-02-19 06:01:40.867091301 +0000 UTC m=+2196.900413880" lastFinishedPulling="2026-02-19 06:01:43.294106992 +0000 UTC m=+2199.327429591" observedRunningTime="2026-02-19 06:01:43.950593299 +0000 UTC m=+2199.983915878" watchObservedRunningTime="2026-02-19 06:01:43.956706069 +0000 UTC m=+2199.990028648" Feb 19 06:01:50 crc kubenswrapper[5012]: I0219 06:01:50.103380 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:50 crc kubenswrapper[5012]: I0219 
06:01:50.103976 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:50 crc kubenswrapper[5012]: I0219 06:01:50.179028 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:51 crc kubenswrapper[5012]: I0219 06:01:51.056651 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:51 crc kubenswrapper[5012]: I0219 06:01:51.108025 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:52 crc kubenswrapper[5012]: I0219 06:01:52.703167 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:01:52 crc kubenswrapper[5012]: E0219 06:01:52.704011 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.031436 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nqr4" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="registry-server" containerID="cri-o://4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" gracePeriod=2 Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.619158 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.710184 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") pod \"d888e883-8262-4386-b91a-14e87cd7fed3\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.710292 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") pod \"d888e883-8262-4386-b91a-14e87cd7fed3\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.710388 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") pod \"d888e883-8262-4386-b91a-14e87cd7fed3\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.711890 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities" (OuterVolumeSpecName: "utilities") pod "d888e883-8262-4386-b91a-14e87cd7fed3" (UID: "d888e883-8262-4386-b91a-14e87cd7fed3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.712620 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.717384 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l" (OuterVolumeSpecName: "kube-api-access-9bt4l") pod "d888e883-8262-4386-b91a-14e87cd7fed3" (UID: "d888e883-8262-4386-b91a-14e87cd7fed3"). InnerVolumeSpecName "kube-api-access-9bt4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.780833 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d888e883-8262-4386-b91a-14e87cd7fed3" (UID: "d888e883-8262-4386-b91a-14e87cd7fed3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.815717 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.815773 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046052 5012 generic.go:334] "Generic (PLEG): container finished" podID="d888e883-8262-4386-b91a-14e87cd7fed3" containerID="4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" exitCode=0 Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046160 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046182 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerDied","Data":"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1"} Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046236 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerDied","Data":"d4f572a48e3284c87552e4d72878660194d42b6a59b07ece38f080eec80bec4c"} Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046344 5012 scope.go:117] "RemoveContainer" containerID="4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.089774 5012 scope.go:117] "RemoveContainer" 
containerID="9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.091241 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.103370 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.124168 5012 scope.go:117] "RemoveContainer" containerID="90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.194858 5012 scope.go:117] "RemoveContainer" containerID="4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" Feb 19 06:01:54 crc kubenswrapper[5012]: E0219 06:01:54.195372 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1\": container with ID starting with 4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1 not found: ID does not exist" containerID="4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.195403 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1"} err="failed to get container status \"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1\": rpc error: code = NotFound desc = could not find container \"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1\": container with ID starting with 4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1 not found: ID does not exist" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.195423 5012 scope.go:117] "RemoveContainer" 
containerID="9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7" Feb 19 06:01:54 crc kubenswrapper[5012]: E0219 06:01:54.195807 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7\": container with ID starting with 9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7 not found: ID does not exist" containerID="9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.195850 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7"} err="failed to get container status \"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7\": rpc error: code = NotFound desc = could not find container \"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7\": container with ID starting with 9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7 not found: ID does not exist" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.195881 5012 scope.go:117] "RemoveContainer" containerID="90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6" Feb 19 06:01:54 crc kubenswrapper[5012]: E0219 06:01:54.196275 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6\": container with ID starting with 90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6 not found: ID does not exist" containerID="90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.196296 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6"} err="failed to get container status \"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6\": rpc error: code = NotFound desc = could not find container \"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6\": container with ID starting with 90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6 not found: ID does not exist" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.721345 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" path="/var/lib/kubelet/pods/d888e883-8262-4386-b91a-14e87cd7fed3/volumes" Feb 19 06:02:05 crc kubenswrapper[5012]: I0219 06:02:05.702837 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:05 crc kubenswrapper[5012]: E0219 06:02:05.703989 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:16 crc kubenswrapper[5012]: I0219 06:02:16.705364 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:16 crc kubenswrapper[5012]: E0219 06:02:16.706464 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:27 crc kubenswrapper[5012]: I0219 06:02:27.703471 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:27 crc kubenswrapper[5012]: E0219 06:02:27.704479 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:42 crc kubenswrapper[5012]: I0219 06:02:42.703411 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:42 crc kubenswrapper[5012]: E0219 06:02:42.704501 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.191871 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:02:56 crc kubenswrapper[5012]: E0219 06:02:56.192998 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="extract-utilities" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.193047 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" 
containerName="extract-utilities" Feb 19 06:02:56 crc kubenswrapper[5012]: E0219 06:02:56.193076 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="extract-content" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.193099 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="extract-content" Feb 19 06:02:56 crc kubenswrapper[5012]: E0219 06:02:56.193131 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="registry-server" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.193138 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="registry-server" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.193386 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="registry-server" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.195106 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.229354 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.364569 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.364635 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.364710 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466153 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466227 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466317 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466681 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466737 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.487402 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.521614 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.707084 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:56 crc kubenswrapper[5012]: E0219 06:02:56.709500 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:57 crc kubenswrapper[5012]: I0219 06:02:57.029020 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:02:57 crc kubenswrapper[5012]: I0219 06:02:57.835391 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerID="0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb" exitCode=0 Feb 19 06:02:57 crc kubenswrapper[5012]: I0219 06:02:57.835484 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerDied","Data":"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb"} Feb 19 06:02:57 crc kubenswrapper[5012]: I0219 06:02:57.835863 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerStarted","Data":"c230c80d248b01025994ea307fc0f0128580771c2a08a4d5f702819870fdea83"} Feb 19 06:02:58 crc kubenswrapper[5012]: I0219 06:02:58.850662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" 
event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerStarted","Data":"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f"} Feb 19 06:02:59 crc kubenswrapper[5012]: I0219 06:02:59.869498 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerID="7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f" exitCode=0 Feb 19 06:02:59 crc kubenswrapper[5012]: I0219 06:02:59.869555 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerDied","Data":"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f"} Feb 19 06:03:00 crc kubenswrapper[5012]: I0219 06:03:00.882169 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerStarted","Data":"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff"} Feb 19 06:03:00 crc kubenswrapper[5012]: I0219 06:03:00.903174 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qd2s5" podStartSLOduration=2.469018074 podStartE2EDuration="4.903143987s" podCreationTimestamp="2026-02-19 06:02:56 +0000 UTC" firstStartedPulling="2026-02-19 06:02:57.838323002 +0000 UTC m=+2273.871645571" lastFinishedPulling="2026-02-19 06:03:00.272448905 +0000 UTC m=+2276.305771484" observedRunningTime="2026-02-19 06:03:00.902164403 +0000 UTC m=+2276.935487002" watchObservedRunningTime="2026-02-19 06:03:00.903143987 +0000 UTC m=+2276.936466596" Feb 19 06:03:06 crc kubenswrapper[5012]: I0219 06:03:06.522058 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:06 crc kubenswrapper[5012]: I0219 06:03:06.522893 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:06 crc kubenswrapper[5012]: I0219 06:03:06.577285 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:07 crc kubenswrapper[5012]: I0219 06:03:07.040952 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:07 crc kubenswrapper[5012]: I0219 06:03:07.097631 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:03:08 crc kubenswrapper[5012]: I0219 06:03:08.704202 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:03:08 crc kubenswrapper[5012]: E0219 06:03:08.704787 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:03:08 crc kubenswrapper[5012]: I0219 06:03:08.979761 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qd2s5" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="registry-server" containerID="cri-o://553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" gracePeriod=2 Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.531290 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.722471 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") pod \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.722555 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") pod \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.722673 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") pod \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.724633 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities" (OuterVolumeSpecName: "utilities") pod "f3f94370-8ffb-4a67-9042-898ee37ed2a8" (UID: "f3f94370-8ffb-4a67-9042-898ee37ed2a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.750389 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt" (OuterVolumeSpecName: "kube-api-access-hldnt") pod "f3f94370-8ffb-4a67-9042-898ee37ed2a8" (UID: "f3f94370-8ffb-4a67-9042-898ee37ed2a8"). InnerVolumeSpecName "kube-api-access-hldnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.757093 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3f94370-8ffb-4a67-9042-898ee37ed2a8" (UID: "f3f94370-8ffb-4a67-9042-898ee37ed2a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.826027 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") on node \"crc\" DevicePath \"\"" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.826073 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.826094 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.001877 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerID="553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" exitCode=0 Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.001946 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerDied","Data":"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff"} Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.001990 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerDied","Data":"c230c80d248b01025994ea307fc0f0128580771c2a08a4d5f702819870fdea83"} Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.002023 5012 scope.go:117] "RemoveContainer" containerID="553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.002206 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.034738 5012 scope.go:117] "RemoveContainer" containerID="7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.060564 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.062194 5012 scope.go:117] "RemoveContainer" containerID="0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.072764 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.142373 5012 scope.go:117] "RemoveContainer" containerID="553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" Feb 19 06:03:10 crc kubenswrapper[5012]: E0219 06:03:10.142962 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff\": container with ID starting with 553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff not found: ID does not exist" containerID="553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.143011 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff"} err="failed to get container status \"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff\": rpc error: code = NotFound desc = could not find container \"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff\": container with ID starting with 553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff not found: ID does not exist" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.143043 5012 scope.go:117] "RemoveContainer" containerID="7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f" Feb 19 06:03:10 crc kubenswrapper[5012]: E0219 06:03:10.145570 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f\": container with ID starting with 7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f not found: ID does not exist" containerID="7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.145618 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f"} err="failed to get container status \"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f\": rpc error: code = NotFound desc = could not find container \"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f\": container with ID starting with 7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f not found: ID does not exist" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.145647 5012 scope.go:117] "RemoveContainer" containerID="0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb" Feb 19 06:03:10 crc kubenswrapper[5012]: E0219 
06:03:10.146026 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb\": container with ID starting with 0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb not found: ID does not exist" containerID="0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.146060 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb"} err="failed to get container status \"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb\": rpc error: code = NotFound desc = could not find container \"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb\": container with ID starting with 0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb not found: ID does not exist" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.724714 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" path="/var/lib/kubelet/pods/f3f94370-8ffb-4a67-9042-898ee37ed2a8/volumes" Feb 19 06:03:19 crc kubenswrapper[5012]: I0219 06:03:19.703446 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:03:19 crc kubenswrapper[5012]: E0219 06:03:19.704279 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:03:33 crc kubenswrapper[5012]: I0219 06:03:33.703548 
5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:03:33 crc kubenswrapper[5012]: E0219 06:03:33.704666 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:03:47 crc kubenswrapper[5012]: I0219 06:03:47.704713 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:03:47 crc kubenswrapper[5012]: E0219 06:03:47.706627 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:01 crc kubenswrapper[5012]: I0219 06:04:01.702810 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:04:01 crc kubenswrapper[5012]: E0219 06:04:01.703832 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:12 crc kubenswrapper[5012]: I0219 
06:04:12.703776 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:04:12 crc kubenswrapper[5012]: E0219 06:04:12.704821 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:14 crc kubenswrapper[5012]: I0219 06:04:14.811433 5012 generic.go:334] "Generic (PLEG): container finished" podID="fcace677-35b0-499f-998c-99168fbfa0af" containerID="ead8d5cbfbadc07cdc6949287d7eaad0d3adb71e861dbd504d482651d9e45f96" exitCode=0 Feb 19 06:04:14 crc kubenswrapper[5012]: I0219 06:04:14.811579 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" event={"ID":"fcace677-35b0-499f-998c-99168fbfa0af","Type":"ContainerDied","Data":"ead8d5cbfbadc07cdc6949287d7eaad0d3adb71e861dbd504d482651d9e45f96"} Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.284211 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.348101 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.348297 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.348429 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.348955 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.349055 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.356647 5012 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7" (OuterVolumeSpecName: "kube-api-access-6g9n7") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "kube-api-access-6g9n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.357106 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.380164 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory" (OuterVolumeSpecName: "inventory") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.393842 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.408004 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452479 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452514 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452525 5012 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452535 5012 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452544 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.836022 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" event={"ID":"fcace677-35b0-499f-998c-99168fbfa0af","Type":"ContainerDied","Data":"845fa55489eb1ebebf023adf297c3cff09eae6d31e26dd2248e57ae7baeee857"} Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.836447 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845fa55489eb1ebebf023adf297c3cff09eae6d31e26dd2248e57ae7baeee857" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.836162 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.957545 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"] Feb 19 06:04:16 crc kubenswrapper[5012]: E0219 06:04:16.958223 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="extract-content" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958255 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="extract-content" Feb 19 06:04:16 crc kubenswrapper[5012]: E0219 06:04:16.958280 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcace677-35b0-499f-998c-99168fbfa0af" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958294 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcace677-35b0-499f-998c-99168fbfa0af" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 06:04:16 crc kubenswrapper[5012]: E0219 06:04:16.958355 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="registry-server" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958368 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="registry-server" Feb 19 06:04:16 crc kubenswrapper[5012]: E0219 06:04:16.958405 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="extract-utilities" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958418 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="extract-utilities" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958743 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcace677-35b0-499f-998c-99168fbfa0af" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958814 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="registry-server" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.959986 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.962661 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.963948 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.964142 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.964437 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.964622 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.964632 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.970768 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"] Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.971694 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065689 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: 
I0219 06:04:17.065744 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065814 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065839 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065897 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065953 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066129 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066150 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066171 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: 
\"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168587 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168677 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168786 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168866 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168909 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168981 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.169016 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: 
\"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.169091 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.169119 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.169153 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.170820 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.173622 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.174966 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.175800 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.176268 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.177143 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.177272 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.178850 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.179388 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.188426 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.196180 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrwww\" (UniqueName: 
\"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.284001 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.863718 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"] Feb 19 06:04:18 crc kubenswrapper[5012]: I0219 06:04:18.863058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" event={"ID":"a6116441-2985-4723-9889-6c3422159243","Type":"ContainerStarted","Data":"967e14daca86ede14e132cc858325fa0f57a4633145bebf5ee02898a3d72c1e2"} Feb 19 06:04:18 crc kubenswrapper[5012]: I0219 06:04:18.863835 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" event={"ID":"a6116441-2985-4723-9889-6c3422159243","Type":"ContainerStarted","Data":"5de839f474f5b20e3d0844ff7d1bc3e34f78d929794f0ed3f351fca954643e98"} Feb 19 06:04:18 crc kubenswrapper[5012]: I0219 06:04:18.894050 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" podStartSLOduration=2.421512818 podStartE2EDuration="2.89401592s" podCreationTimestamp="2026-02-19 06:04:16 +0000 UTC" firstStartedPulling="2026-02-19 06:04:17.866954598 +0000 UTC m=+2353.900277207" lastFinishedPulling="2026-02-19 06:04:18.33945771 +0000 UTC m=+2354.372780309" observedRunningTime="2026-02-19 06:04:18.880685343 +0000 UTC m=+2354.914007942" watchObservedRunningTime="2026-02-19 06:04:18.89401592 +0000 UTC m=+2354.927338529" Feb 19 06:04:25 crc kubenswrapper[5012]: I0219 
06:04:25.703464 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:04:25 crc kubenswrapper[5012]: E0219 06:04:25.704567 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:36 crc kubenswrapper[5012]: I0219 06:04:36.703393 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:04:36 crc kubenswrapper[5012]: E0219 06:04:36.704634 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:48 crc kubenswrapper[5012]: I0219 06:04:48.702928 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:04:48 crc kubenswrapper[5012]: E0219 06:04:48.703689 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:59 crc 
kubenswrapper[5012]: I0219 06:04:59.703391 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:04:59 crc kubenswrapper[5012]: E0219 06:04:59.704635 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:05:12 crc kubenswrapper[5012]: I0219 06:05:12.703548 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:05:12 crc kubenswrapper[5012]: E0219 06:05:12.704732 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:05:25 crc kubenswrapper[5012]: I0219 06:05:25.702901 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:05:25 crc kubenswrapper[5012]: E0219 06:05:25.703726 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 
19 06:05:37 crc kubenswrapper[5012]: I0219 06:05:37.704792 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:05:37 crc kubenswrapper[5012]: E0219 06:05:37.706228 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:05:51 crc kubenswrapper[5012]: I0219 06:05:51.703662 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:05:51 crc kubenswrapper[5012]: E0219 06:05:51.704953 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:06:02 crc kubenswrapper[5012]: I0219 06:06:02.703634 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:06:02 crc kubenswrapper[5012]: E0219 06:06:02.704422 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.691951 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ctxn5"] Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.696011 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.723238 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"] Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.804221 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s7m\" (UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.804321 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.804343 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906087 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4s7m\" 
(UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906197 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906225 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906679 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906794 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.928159 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4s7m\" (UniqueName: 
\"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:13 crc kubenswrapper[5012]: I0219 06:06:13.074797 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:13 crc kubenswrapper[5012]: I0219 06:06:13.618259 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"] Feb 19 06:06:14 crc kubenswrapper[5012]: I0219 06:06:14.312466 5012 generic.go:334] "Generic (PLEG): container finished" podID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerID="8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde" exitCode=0 Feb 19 06:06:14 crc kubenswrapper[5012]: I0219 06:06:14.312527 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerDied","Data":"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde"} Feb 19 06:06:14 crc kubenswrapper[5012]: I0219 06:06:14.312857 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerStarted","Data":"db61563b27fc6fc93701d7a6fee141f2ddcf944bdc688ed83b67029233dc21be"} Feb 19 06:06:16 crc kubenswrapper[5012]: I0219 06:06:16.334532 5012 generic.go:334] "Generic (PLEG): container finished" podID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerID="2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12" exitCode=0 Feb 19 06:06:16 crc kubenswrapper[5012]: I0219 06:06:16.334603 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" 
event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerDied","Data":"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12"} Feb 19 06:06:17 crc kubenswrapper[5012]: I0219 06:06:17.347662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerStarted","Data":"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"} Feb 19 06:06:17 crc kubenswrapper[5012]: I0219 06:06:17.371641 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ctxn5" podStartSLOduration=2.905620774 podStartE2EDuration="5.371626377s" podCreationTimestamp="2026-02-19 06:06:12 +0000 UTC" firstStartedPulling="2026-02-19 06:06:14.314615422 +0000 UTC m=+2470.347937991" lastFinishedPulling="2026-02-19 06:06:16.780621025 +0000 UTC m=+2472.813943594" observedRunningTime="2026-02-19 06:06:17.367048625 +0000 UTC m=+2473.400371184" watchObservedRunningTime="2026-02-19 06:06:17.371626377 +0000 UTC m=+2473.404948946" Feb 19 06:06:17 crc kubenswrapper[5012]: I0219 06:06:17.703720 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:06:18 crc kubenswrapper[5012]: I0219 06:06:18.359052 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc"} Feb 19 06:06:23 crc kubenswrapper[5012]: I0219 06:06:23.075566 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:23 crc kubenswrapper[5012]: I0219 06:06:23.077119 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:23 crc kubenswrapper[5012]: I0219 06:06:23.125862 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:23 crc kubenswrapper[5012]: I0219 06:06:23.474281 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:25 crc kubenswrapper[5012]: I0219 06:06:25.892539 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"] Feb 19 06:06:26 crc kubenswrapper[5012]: I0219 06:06:26.435359 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ctxn5" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="registry-server" containerID="cri-o://4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3" gracePeriod=2 Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.006438 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.111127 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") pod \"c880ffe9-ca26-4a2a-bab2-3343004ff665\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.111181 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4s7m\" (UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") pod \"c880ffe9-ca26-4a2a-bab2-3343004ff665\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.111259 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") pod \"c880ffe9-ca26-4a2a-bab2-3343004ff665\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.112554 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities" (OuterVolumeSpecName: "utilities") pod "c880ffe9-ca26-4a2a-bab2-3343004ff665" (UID: "c880ffe9-ca26-4a2a-bab2-3343004ff665"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.116750 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m" (OuterVolumeSpecName: "kube-api-access-w4s7m") pod "c880ffe9-ca26-4a2a-bab2-3343004ff665" (UID: "c880ffe9-ca26-4a2a-bab2-3343004ff665"). InnerVolumeSpecName "kube-api-access-w4s7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.175521 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c880ffe9-ca26-4a2a-bab2-3343004ff665" (UID: "c880ffe9-ca26-4a2a-bab2-3343004ff665"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.213727 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.214063 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.214074 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4s7m\" (UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450736 5012 generic.go:334] "Generic (PLEG): container finished" podID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerID="4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3" exitCode=0 Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450779 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerDied","Data":"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"} Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450806 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerDied","Data":"db61563b27fc6fc93701d7a6fee141f2ddcf944bdc688ed83b67029233dc21be"} Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450837 5012 scope.go:117] "RemoveContainer" containerID="4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450865 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctxn5" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.497597 5012 scope.go:117] "RemoveContainer" containerID="2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.530204 5012 scope.go:117] "RemoveContainer" containerID="8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.530364 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"] Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.551783 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"] Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.574804 5012 scope.go:117] "RemoveContainer" containerID="4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3" Feb 19 06:06:27 crc kubenswrapper[5012]: E0219 06:06:27.575405 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3\": container with ID starting with 4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3 not found: ID does not exist" containerID="4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 
06:06:27.575467 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"} err="failed to get container status \"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3\": rpc error: code = NotFound desc = could not find container \"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3\": container with ID starting with 4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3 not found: ID does not exist" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.575500 5012 scope.go:117] "RemoveContainer" containerID="2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12" Feb 19 06:06:27 crc kubenswrapper[5012]: E0219 06:06:27.576267 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12\": container with ID starting with 2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12 not found: ID does not exist" containerID="2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.576356 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12"} err="failed to get container status \"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12\": rpc error: code = NotFound desc = could not find container \"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12\": container with ID starting with 2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12 not found: ID does not exist" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.576389 5012 scope.go:117] "RemoveContainer" containerID="8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde" Feb 19 06:06:27 crc 
kubenswrapper[5012]: E0219 06:06:27.576686 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde\": container with ID starting with 8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde not found: ID does not exist" containerID="8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde" Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.576713 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde"} err="failed to get container status \"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde\": rpc error: code = NotFound desc = could not find container \"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde\": container with ID starting with 8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde not found: ID does not exist" Feb 19 06:06:28 crc kubenswrapper[5012]: I0219 06:06:28.716345 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" path="/var/lib/kubelet/pods/c880ffe9-ca26-4a2a-bab2-3343004ff665/volumes" Feb 19 06:06:50 crc kubenswrapper[5012]: I0219 06:06:50.744136 5012 generic.go:334] "Generic (PLEG): container finished" podID="a6116441-2985-4723-9889-6c3422159243" containerID="967e14daca86ede14e132cc858325fa0f57a4633145bebf5ee02898a3d72c1e2" exitCode=0 Feb 19 06:06:50 crc kubenswrapper[5012]: I0219 06:06:50.744657 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" event={"ID":"a6116441-2985-4723-9889-6c3422159243","Type":"ContainerDied","Data":"967e14daca86ede14e132cc858325fa0f57a4633145bebf5ee02898a3d72c1e2"} Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.262836 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384265 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384360 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384396 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384437 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384457 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384529 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384592 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384672 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.385365 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.385401 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.385467 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.393778 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww" (OuterVolumeSpecName: "kube-api-access-mrwww") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "kube-api-access-mrwww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.396442 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.434427 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.435454 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). 
InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.440852 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory" (OuterVolumeSpecName: "inventory") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.444177 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.457872 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.463883 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.464694 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.466924 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.467436 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489266 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489352 5012 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489370 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489383 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489396 5012 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489408 5012 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489420 5012 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") on 
node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489432 5012 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489443 5012 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489454 5012 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489465 5012 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.802174 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" event={"ID":"a6116441-2985-4723-9889-6c3422159243","Type":"ContainerDied","Data":"5de839f474f5b20e3d0844ff7d1bc3e34f78d929794f0ed3f351fca954643e98"} Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.802210 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de839f474f5b20e3d0844ff7d1bc3e34f78d929794f0ed3f351fca954643e98" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.802461 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.897714 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"] Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.898129 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="extract-utilities" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898145 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="extract-utilities" Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.898157 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6116441-2985-4723-9889-6c3422159243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898165 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6116441-2985-4723-9889-6c3422159243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.898176 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="registry-server" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898181 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="registry-server" Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.898205 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="extract-content" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898212 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="extract-content" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898424 5012 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="registry-server" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898439 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6116441-2985-4723-9889-6c3422159243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.899071 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.903787 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.903917 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.903937 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.903874 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.904257 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.908655 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"] Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.964697 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6116441_2985_4723_9889_6c3422159243.slice\": RecentStats: unable to find data in memory cache]" Feb 19 
06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.001606 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.001660 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.001695 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.002124 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.002233 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.002299 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.002422 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103710 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103781 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103829 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103877 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103911 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103943 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: 
\"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103980 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.107602 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.108111 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.108926 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.109763 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.110147 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.110717 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.122347 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.246258 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.876701 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"] Feb 19 06:06:53 crc kubenswrapper[5012]: W0219 06:06:53.882992 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73fe066f_3ee6_4ffc_aeb4_874c14fb0b84.slice/crio-10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37 WatchSource:0}: Error finding container 10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37: Status 404 returned error can't find the container with id 10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37 Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.886529 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:06:54 crc kubenswrapper[5012]: I0219 06:06:54.838402 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" event={"ID":"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84","Type":"ContainerStarted","Data":"66c2389b42efddeb455e935a3386251734490b5a184dbebd586b025e56124a97"} Feb 19 06:06:54 crc kubenswrapper[5012]: I0219 06:06:54.838692 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" event={"ID":"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84","Type":"ContainerStarted","Data":"10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37"} Feb 19 06:06:54 crc kubenswrapper[5012]: I0219 06:06:54.863849 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" podStartSLOduration=2.416563312 podStartE2EDuration="2.863825697s" podCreationTimestamp="2026-02-19 06:06:52 +0000 UTC" 
firstStartedPulling="2026-02-19 06:06:53.886261296 +0000 UTC m=+2509.919583875" lastFinishedPulling="2026-02-19 06:06:54.333523651 +0000 UTC m=+2510.366846260" observedRunningTime="2026-02-19 06:06:54.857529533 +0000 UTC m=+2510.890852182" watchObservedRunningTime="2026-02-19 06:06:54.863825697 +0000 UTC m=+2510.897148286" Feb 19 06:08:44 crc kubenswrapper[5012]: I0219 06:08:44.430826 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:08:44 crc kubenswrapper[5012]: I0219 06:08:44.431467 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:08:56 crc kubenswrapper[5012]: I0219 06:08:56.251506 5012 generic.go:334] "Generic (PLEG): container finished" podID="73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" containerID="66c2389b42efddeb455e935a3386251734490b5a184dbebd586b025e56124a97" exitCode=0 Feb 19 06:08:56 crc kubenswrapper[5012]: I0219 06:08:56.251576 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" event={"ID":"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84","Type":"ContainerDied","Data":"66c2389b42efddeb455e935a3386251734490b5a184dbebd586b025e56124a97"} Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.718393 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.865445 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.865891 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.866659 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.866814 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.866895 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " Feb 19 06:08:57 crc kubenswrapper[5012]: 
I0219 06:08:57.867030 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.867144 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.872021 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.873331 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg" (OuterVolumeSpecName: "kube-api-access-9m6vg") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "kube-api-access-9m6vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.909874 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory" (OuterVolumeSpecName: "inventory") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.910146 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.915805 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.916331 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.933478 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.970929 5012 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.970990 5012 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971017 5012 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971038 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971058 5012 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971079 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") on node \"crc\" DevicePath \"\"" Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971099 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 06:08:58 crc kubenswrapper[5012]: I0219 06:08:58.278000 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" event={"ID":"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84","Type":"ContainerDied","Data":"10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37"} Feb 19 06:08:58 crc kubenswrapper[5012]: I0219 06:08:58.278051 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37" Feb 19 06:08:58 crc kubenswrapper[5012]: I0219 06:08:58.278132 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:09:14 crc kubenswrapper[5012]: I0219 06:09:14.431193 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:09:14 crc kubenswrapper[5012]: I0219 06:09:14.432278 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.381506 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:09:38 crc kubenswrapper[5012]: E0219 06:09:38.382953 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.382979 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.383374 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.392103 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.396744 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.494407 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.494861 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.494891 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") pod 
\"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.597118 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.597292 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.597386 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.597905 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.598013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") pod \"redhat-operators-5nvpj\" (UID: 
\"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.638616 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.742869 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.912366 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.913070 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="prometheus" containerID="cri-o://2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a" gracePeriod=600 Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.913355 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="thanos-sidecar" containerID="cri-o://2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47" gracePeriod=600 Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.913462 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="config-reloader" containerID="cri-o://fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649" gracePeriod=600 Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.287844 5012 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.742703 5012 generic.go:334] "Generic (PLEG): container finished" podID="594190bd-cf27-4446-b5b9-7fb84361c200" containerID="219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89" exitCode=0 Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.742787 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerDied","Data":"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89"} Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.743839 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerStarted","Data":"535e54cd69617ddc350f2f90e81fe2153053b3c460de6253cf093e44a7c59c54"} Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752019 5012 generic.go:334] "Generic (PLEG): container finished" podID="8509cc68-c35e-47ea-a634-896143d747ed" containerID="2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47" exitCode=0 Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752056 5012 generic.go:334] "Generic (PLEG): container finished" podID="8509cc68-c35e-47ea-a634-896143d747ed" containerID="fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649" exitCode=0 Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752070 5012 generic.go:334] "Generic (PLEG): container finished" podID="8509cc68-c35e-47ea-a634-896143d747ed" containerID="2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a" exitCode=0 Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752101 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47"} Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752143 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649"} Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752158 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a"} Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.966301 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032575 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032641 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032747 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032813 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032887 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032916 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033040 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033082 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") pod 
\"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033199 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033242 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033269 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033334 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033474 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " Feb 19 06:09:40 crc 
kubenswrapper[5012]: I0219 06:09:40.038712 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.042236 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.042482 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.045548 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.045741 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config" (OuterVolumeSpecName: "config") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.048512 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.048686 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out" (OuterVolumeSpecName: "config-out") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.052374 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "secret-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.053085 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.067100 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l" (OuterVolumeSpecName: "kube-api-access-tq95l") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "kube-api-access-tq95l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.067359 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.092489 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). 
InnerVolumeSpecName "pvc-7fbf442c-c467-48a5-9a2f-86a74d778584". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135701 5012 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135733 5012 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135744 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135755 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135766 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135778 5012 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135786 
5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135795 5012 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135804 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135839 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") on node \"crc\" " Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135850 5012 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135860 5012 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.147972 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config" (OuterVolumeSpecName: "web-config") pod 
"8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.159938 5012 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.160076 5012 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7fbf442c-c467-48a5-9a2f-86a74d778584" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584") on node "crc" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.243129 5012 reconciler_common.go:293] "Volume detached for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.243166 5012 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.771061 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5"} Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.771446 5012 scope.go:117] "RemoveContainer" containerID="2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.771191 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.815524 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.832789 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.838809 5012 scope.go:117] "RemoveContainer" containerID="fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.866698 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 06:09:40 crc kubenswrapper[5012]: E0219 06:09:40.872897 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="thanos-sidecar" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.872915 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="thanos-sidecar" Feb 19 06:09:40 crc kubenswrapper[5012]: E0219 06:09:40.872928 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="prometheus" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.872936 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="prometheus" Feb 19 06:09:40 crc kubenswrapper[5012]: E0219 06:09:40.872954 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="init-config-reloader" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.872961 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="init-config-reloader" Feb 19 06:09:40 crc kubenswrapper[5012]: E0219 06:09:40.872990 5012 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="config-reloader" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.872997 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="config-reloader" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.873197 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="prometheus" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.873221 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="config-reloader" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.873238 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="thanos-sidecar" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.879196 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.888514 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.888708 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.888976 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.889103 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.889203 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.889452 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7bqtw" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.889587 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.893513 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.894732 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.907897 5012 scope.go:117] "RemoveContainer" containerID="2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.947003 5012 scope.go:117] "RemoveContainer" 
containerID="4ee433ab916c49fcf886f80ee6ab1bd1a03ffacf8d9e4d295c0b15de25056e64" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.977644 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.977705 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.977808 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978136 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqzlj\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-kube-api-access-wqzlj\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978354 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978526 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978652 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978682 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978715 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978827 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978863 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978919 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978960 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a64b2810-4982-43ef-ae9f-1e7852394d60-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081413 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081544 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081566 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081592 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081632 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081684 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081710 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a64b2810-4982-43ef-ae9f-1e7852394d60-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081750 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081776 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081814 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081873 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqzlj\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-kube-api-access-wqzlj\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.083116 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.083272 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.084419 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.088262 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.088575 5012 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.088617 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/80266977aa18e8991458f1f7d5520b709fb21586520e915bbacb4bc2380e455f/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.088962 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.089347 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.090334 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a64b2810-4982-43ef-ae9f-1e7852394d60-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.090627 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.090920 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.093791 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.094887 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.101979 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqzlj\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-kube-api-access-wqzlj\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.140853 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.216748 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.705088 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.794346 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"42ebd1835f5a5dcad484994cd62ab30919c257d2732704e6785d8ab7c963c43f"} Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.796634 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerStarted","Data":"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927"} Feb 19 06:09:42 crc kubenswrapper[5012]: I0219 06:09:42.717674 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8509cc68-c35e-47ea-a634-896143d747ed" path="/var/lib/kubelet/pods/8509cc68-c35e-47ea-a634-896143d747ed/volumes" Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.431109 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.431211 5012 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.431280 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.432452 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.432537 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc" gracePeriod=600 Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.844836 5012 generic.go:334] "Generic (PLEG): container finished" podID="594190bd-cf27-4446-b5b9-7fb84361c200" containerID="00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927" exitCode=0 Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.844948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerDied","Data":"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927"} Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.849574 5012 generic.go:334] "Generic (PLEG): container 
finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc" exitCode=0 Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.849615 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc"} Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.849655 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:09:46 crc kubenswrapper[5012]: I0219 06:09:46.861848 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerStarted","Data":"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38"} Feb 19 06:09:46 crc kubenswrapper[5012]: I0219 06:09:46.865929 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc"} Feb 19 06:09:46 crc kubenswrapper[5012]: I0219 06:09:46.869980 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"9ff7a9eb8aea62d1cd4f806571829782a58d9fc03c69bc5ba47ce2d1097c2287"} Feb 19 06:09:46 crc kubenswrapper[5012]: I0219 06:09:46.892335 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5nvpj" podStartSLOduration=2.420594742 podStartE2EDuration="8.892291144s" podCreationTimestamp="2026-02-19 06:09:38 +0000 UTC" firstStartedPulling="2026-02-19 06:09:39.745414119 +0000 UTC 
m=+2675.778736688" lastFinishedPulling="2026-02-19 06:09:46.217110531 +0000 UTC m=+2682.250433090" observedRunningTime="2026-02-19 06:09:46.883266194 +0000 UTC m=+2682.916588783" watchObservedRunningTime="2026-02-19 06:09:46.892291144 +0000 UTC m=+2682.925613733" Feb 19 06:09:48 crc kubenswrapper[5012]: I0219 06:09:48.743597 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:48 crc kubenswrapper[5012]: I0219 06:09:48.744186 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:49 crc kubenswrapper[5012]: I0219 06:09:49.867203 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5nvpj" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" probeResult="failure" output=< Feb 19 06:09:49 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:09:49 crc kubenswrapper[5012]: > Feb 19 06:09:55 crc kubenswrapper[5012]: I0219 06:09:55.972545 5012 generic.go:334] "Generic (PLEG): container finished" podID="a64b2810-4982-43ef-ae9f-1e7852394d60" containerID="9ff7a9eb8aea62d1cd4f806571829782a58d9fc03c69bc5ba47ce2d1097c2287" exitCode=0 Feb 19 06:09:55 crc kubenswrapper[5012]: I0219 06:09:55.972664 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerDied","Data":"9ff7a9eb8aea62d1cd4f806571829782a58d9fc03c69bc5ba47ce2d1097c2287"} Feb 19 06:09:56 crc kubenswrapper[5012]: I0219 06:09:56.989270 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"fa47c8cd3d12d9ab3e651390eb20631a26da141672dc66baca2e8273fbec7049"} Feb 19 06:09:59 crc kubenswrapper[5012]: I0219 06:09:59.074965 
5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:59 crc kubenswrapper[5012]: I0219 06:09:59.152187 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:59 crc kubenswrapper[5012]: I0219 06:09:59.322350 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:10:00 crc kubenswrapper[5012]: I0219 06:10:00.027236 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"b991f0143fb8025e1241a5d23e1357686a0bd21e0ac51c7607aae018d4b1a95c"} Feb 19 06:10:00 crc kubenswrapper[5012]: I0219 06:10:00.027278 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"1bb48c199630c15cb81e441508a5db5b10e38e9d52685d4e60840e94f989da67"} Feb 19 06:10:00 crc kubenswrapper[5012]: I0219 06:10:00.057363 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.057344417 podStartE2EDuration="20.057344417s" podCreationTimestamp="2026-02-19 06:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 06:10:00.054428926 +0000 UTC m=+2696.087751495" watchObservedRunningTime="2026-02-19 06:10:00.057344417 +0000 UTC m=+2696.090666986" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.035327 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5nvpj" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" 
containerID="cri-o://a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" gracePeriod=2 Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.217576 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.598869 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.770673 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") pod \"594190bd-cf27-4446-b5b9-7fb84361c200\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.770961 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") pod \"594190bd-cf27-4446-b5b9-7fb84361c200\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.772084 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") pod \"594190bd-cf27-4446-b5b9-7fb84361c200\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.772944 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities" (OuterVolumeSpecName: "utilities") pod "594190bd-cf27-4446-b5b9-7fb84361c200" (UID: "594190bd-cf27-4446-b5b9-7fb84361c200"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.777387 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np" (OuterVolumeSpecName: "kube-api-access-sp8np") pod "594190bd-cf27-4446-b5b9-7fb84361c200" (UID: "594190bd-cf27-4446-b5b9-7fb84361c200"). InnerVolumeSpecName "kube-api-access-sp8np". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.875077 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.875117 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") on node \"crc\" DevicePath \"\"" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.880984 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "594190bd-cf27-4446-b5b9-7fb84361c200" (UID: "594190bd-cf27-4446-b5b9-7fb84361c200"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.977046 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051136 5012 generic.go:334] "Generic (PLEG): container finished" podID="594190bd-cf27-4446-b5b9-7fb84361c200" containerID="a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" exitCode=0 Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051236 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051348 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerDied","Data":"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38"} Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051392 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerDied","Data":"535e54cd69617ddc350f2f90e81fe2153053b3c460de6253cf093e44a7c59c54"} Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051421 5012 scope.go:117] "RemoveContainer" containerID="a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.096805 5012 scope.go:117] "RemoveContainer" containerID="00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.114239 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 
06:10:02.128976 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.147962 5012 scope.go:117] "RemoveContainer" containerID="219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.194698 5012 scope.go:117] "RemoveContainer" containerID="a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" Feb 19 06:10:02 crc kubenswrapper[5012]: E0219 06:10:02.195477 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38\": container with ID starting with a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38 not found: ID does not exist" containerID="a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.195550 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38"} err="failed to get container status \"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38\": rpc error: code = NotFound desc = could not find container \"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38\": container with ID starting with a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38 not found: ID does not exist" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.195604 5012 scope.go:117] "RemoveContainer" containerID="00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927" Feb 19 06:10:02 crc kubenswrapper[5012]: E0219 06:10:02.196165 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927\": container with ID 
starting with 00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927 not found: ID does not exist" containerID="00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.196206 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927"} err="failed to get container status \"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927\": rpc error: code = NotFound desc = could not find container \"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927\": container with ID starting with 00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927 not found: ID does not exist" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.196233 5012 scope.go:117] "RemoveContainer" containerID="219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89" Feb 19 06:10:02 crc kubenswrapper[5012]: E0219 06:10:02.196722 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89\": container with ID starting with 219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89 not found: ID does not exist" containerID="219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.196770 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89"} err="failed to get container status \"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89\": rpc error: code = NotFound desc = could not find container \"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89\": container with ID starting with 219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89 not found: 
ID does not exist" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.723711 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" path="/var/lib/kubelet/pods/594190bd-cf27-4446-b5b9-7fb84361c200/volumes" Feb 19 06:10:11 crc kubenswrapper[5012]: I0219 06:10:11.217197 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 06:10:11 crc kubenswrapper[5012]: I0219 06:10:11.227056 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 06:10:12 crc kubenswrapper[5012]: I0219 06:10:12.194699 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.783622 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 06:10:30 crc kubenswrapper[5012]: E0219 06:10:30.786774 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="extract-utilities" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.786904 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="extract-utilities" Feb 19 06:10:30 crc kubenswrapper[5012]: E0219 06:10:30.787013 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="extract-content" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.787085 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="extract-content" Feb 19 06:10:30 crc kubenswrapper[5012]: E0219 06:10:30.787169 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" Feb 19 06:10:30 crc 
kubenswrapper[5012]: I0219 06:10:30.787253 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.787657 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.788723 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.792339 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-s2ths" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.792448 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.792519 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.792641 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.798647 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.936079 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.936744 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.936841 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937047 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937112 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937488 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937531 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937609 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.039005 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.039058 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.039493 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" 
(UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.039540 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040516 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040553 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040587 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040575 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040611 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040727 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.041517 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.041857 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.042584 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") device mount path 
\"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.044748 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.046035 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.050515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.052699 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.058725 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.080190 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.131761 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.691875 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 06:10:32 crc kubenswrapper[5012]: I0219 06:10:32.432059 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"54eccb09-b3ec-45bc-8065-4c5eb9516257","Type":"ContainerStarted","Data":"4d40402dd6566caf396779f17c2dbfad2df685b1f64caf3b6b294fc60c0aaaea"} Feb 19 06:10:47 crc kubenswrapper[5012]: I0219 06:10:47.593362 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"54eccb09-b3ec-45bc-8065-4c5eb9516257","Type":"ContainerStarted","Data":"45a71cb7a299afd86b43701046f8b7c089e907df4ed4d824464d2883ac4074ea"} Feb 19 06:10:47 crc kubenswrapper[5012]: I0219 06:10:47.620469 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.7256981700000003 podStartE2EDuration="18.620445688s" podCreationTimestamp="2026-02-19 06:10:29 +0000 UTC" firstStartedPulling="2026-02-19 06:10:31.704318669 +0000 UTC m=+2727.737641248" lastFinishedPulling="2026-02-19 06:10:46.599066157 +0000 UTC m=+2742.632388766" observedRunningTime="2026-02-19 06:10:47.613742534 +0000 UTC m=+2743.647065133" watchObservedRunningTime="2026-02-19 06:10:47.620445688 +0000 UTC m=+2743.653768287" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.663073 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 
19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.665651 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.677777 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.677959 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.678023 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.695706 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.780344 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " 
pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.780436 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.780538 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.781094 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.781174 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.800693 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " 
pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.988854 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:05 crc kubenswrapper[5012]: I0219 06:12:05.517236 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:06 crc kubenswrapper[5012]: I0219 06:12:06.491724 5012 generic.go:334] "Generic (PLEG): container finished" podID="29d004a3-f380-4697-98f8-55fcb4d82038" containerID="dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad" exitCode=0 Feb 19 06:12:06 crc kubenswrapper[5012]: I0219 06:12:06.491782 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerDied","Data":"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad"} Feb 19 06:12:06 crc kubenswrapper[5012]: I0219 06:12:06.491839 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerStarted","Data":"9a8603d7a2167d5833dce35420e4665df5733ee3cbe5e1cf78479691ea6b660f"} Feb 19 06:12:06 crc kubenswrapper[5012]: I0219 06:12:06.494729 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:12:08 crc kubenswrapper[5012]: I0219 06:12:08.513373 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerStarted","Data":"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d"} Feb 19 06:12:10 crc kubenswrapper[5012]: I0219 06:12:10.536901 5012 generic.go:334] "Generic (PLEG): container finished" podID="29d004a3-f380-4697-98f8-55fcb4d82038" 
containerID="c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d" exitCode=0 Feb 19 06:12:10 crc kubenswrapper[5012]: I0219 06:12:10.537002 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerDied","Data":"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d"} Feb 19 06:12:11 crc kubenswrapper[5012]: I0219 06:12:11.551693 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerStarted","Data":"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988"} Feb 19 06:12:11 crc kubenswrapper[5012]: I0219 06:12:11.574274 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkgb5" podStartSLOduration=3.126061717 podStartE2EDuration="7.574260641s" podCreationTimestamp="2026-02-19 06:12:04 +0000 UTC" firstStartedPulling="2026-02-19 06:12:06.494206415 +0000 UTC m=+2822.527528994" lastFinishedPulling="2026-02-19 06:12:10.942405309 +0000 UTC m=+2826.975727918" observedRunningTime="2026-02-19 06:12:11.573965254 +0000 UTC m=+2827.607287863" watchObservedRunningTime="2026-02-19 06:12:11.574260641 +0000 UTC m=+2827.607583210" Feb 19 06:12:14 crc kubenswrapper[5012]: I0219 06:12:14.431043 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:12:14 crc kubenswrapper[5012]: I0219 06:12:14.431130 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:12:14 crc kubenswrapper[5012]: I0219 06:12:14.989213 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:14 crc kubenswrapper[5012]: I0219 06:12:14.989531 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:15 crc kubenswrapper[5012]: I0219 06:12:15.061793 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:25 crc kubenswrapper[5012]: I0219 06:12:25.063670 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:25 crc kubenswrapper[5012]: I0219 06:12:25.123497 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:25 crc kubenswrapper[5012]: I0219 06:12:25.725496 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkgb5" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="registry-server" containerID="cri-o://d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" gracePeriod=2 Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.215248 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.370998 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") pod \"29d004a3-f380-4697-98f8-55fcb4d82038\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.371201 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") pod \"29d004a3-f380-4697-98f8-55fcb4d82038\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.371345 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") pod \"29d004a3-f380-4697-98f8-55fcb4d82038\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.372185 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities" (OuterVolumeSpecName: "utilities") pod "29d004a3-f380-4697-98f8-55fcb4d82038" (UID: "29d004a3-f380-4697-98f8-55fcb4d82038"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.385162 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm" (OuterVolumeSpecName: "kube-api-access-9f5mm") pod "29d004a3-f380-4697-98f8-55fcb4d82038" (UID: "29d004a3-f380-4697-98f8-55fcb4d82038"). InnerVolumeSpecName "kube-api-access-9f5mm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.431199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29d004a3-f380-4697-98f8-55fcb4d82038" (UID: "29d004a3-f380-4697-98f8-55fcb4d82038"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.473843 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.473875 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") on node \"crc\" DevicePath \"\"" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.473890 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740559 5012 generic.go:334] "Generic (PLEG): container finished" podID="29d004a3-f380-4697-98f8-55fcb4d82038" containerID="d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" exitCode=0 Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740780 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerDied","Data":"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988"} Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740908 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerDied","Data":"9a8603d7a2167d5833dce35420e4665df5733ee3cbe5e1cf78479691ea6b660f"} Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740934 5012 scope.go:117] "RemoveContainer" containerID="d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740958 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.778449 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.783897 5012 scope.go:117] "RemoveContainer" containerID="c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.788354 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.819086 5012 scope.go:117] "RemoveContainer" containerID="dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.866226 5012 scope.go:117] "RemoveContainer" containerID="d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" Feb 19 06:12:26 crc kubenswrapper[5012]: E0219 06:12:26.866697 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988\": container with ID starting with d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988 not found: ID does not exist" containerID="d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 
06:12:26.866777 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988"} err="failed to get container status \"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988\": rpc error: code = NotFound desc = could not find container \"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988\": container with ID starting with d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988 not found: ID does not exist" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.866809 5012 scope.go:117] "RemoveContainer" containerID="c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d" Feb 19 06:12:26 crc kubenswrapper[5012]: E0219 06:12:26.867324 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d\": container with ID starting with c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d not found: ID does not exist" containerID="c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.867354 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d"} err="failed to get container status \"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d\": rpc error: code = NotFound desc = could not find container \"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d\": container with ID starting with c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d not found: ID does not exist" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.867376 5012 scope.go:117] "RemoveContainer" containerID="dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad" Feb 19 06:12:26 crc 
kubenswrapper[5012]: E0219 06:12:26.867654 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad\": container with ID starting with dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad not found: ID does not exist" containerID="dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.867689 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad"} err="failed to get container status \"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad\": rpc error: code = NotFound desc = could not find container \"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad\": container with ID starting with dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad not found: ID does not exist" Feb 19 06:12:28 crc kubenswrapper[5012]: I0219 06:12:28.722624 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" path="/var/lib/kubelet/pods/29d004a3-f380-4697-98f8-55fcb4d82038/volumes" Feb 19 06:12:44 crc kubenswrapper[5012]: I0219 06:12:44.431238 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:12:44 crc kubenswrapper[5012]: I0219 06:12:44.431896 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.430951 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.431620 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.431683 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.432458 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.432557 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" gracePeriod=600 Feb 19 06:13:14 crc kubenswrapper[5012]: E0219 06:13:14.557727 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:13:15 crc kubenswrapper[5012]: I0219 06:13:15.285909 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" exitCode=0 Feb 19 06:13:15 crc kubenswrapper[5012]: I0219 06:13:15.285987 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc"} Feb 19 06:13:15 crc kubenswrapper[5012]: I0219 06:13:15.286037 5012 scope.go:117] "RemoveContainer" containerID="8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc" Feb 19 06:13:15 crc kubenswrapper[5012]: I0219 06:13:15.288908 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:13:15 crc kubenswrapper[5012]: E0219 06:13:15.289548 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:13:29 crc kubenswrapper[5012]: I0219 06:13:29.703166 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:13:29 crc kubenswrapper[5012]: 
E0219 06:13:29.704209 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:13:41 crc kubenswrapper[5012]: I0219 06:13:41.703422 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:13:41 crc kubenswrapper[5012]: E0219 06:13:41.704945 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:13:52 crc kubenswrapper[5012]: I0219 06:13:52.704202 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:13:52 crc kubenswrapper[5012]: E0219 06:13:52.705368 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:14:05 crc kubenswrapper[5012]: I0219 06:14:05.703168 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:05 crc 
kubenswrapper[5012]: E0219 06:14:05.704276 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.376139 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:15 crc kubenswrapper[5012]: E0219 06:14:15.377291 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="extract-content" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.377330 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="extract-content" Feb 19 06:14:15 crc kubenswrapper[5012]: E0219 06:14:15.377357 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="extract-utilities" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.377365 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="extract-utilities" Feb 19 06:14:15 crc kubenswrapper[5012]: E0219 06:14:15.377389 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="registry-server" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.377399 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="registry-server" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.377624 5012 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="registry-server" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.379649 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.407763 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.495716 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.495862 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.495957 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.597911 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") pod \"redhat-marketplace-bzsc4\" (UID: 
\"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.598483 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.599075 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.599262 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.599664 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.619624 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " 
pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.748784 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:16 crc kubenswrapper[5012]: I0219 06:14:16.220794 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:17 crc kubenswrapper[5012]: I0219 06:14:17.094626 5012 generic.go:334] "Generic (PLEG): container finished" podID="ac664a0d-6329-4f30-b172-8251efffc897" containerID="fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a" exitCode=0 Feb 19 06:14:17 crc kubenswrapper[5012]: I0219 06:14:17.094735 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerDied","Data":"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a"} Feb 19 06:14:17 crc kubenswrapper[5012]: I0219 06:14:17.094994 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerStarted","Data":"d4c09887c9bcdda41a18bc0f97b29786c4c22d3d1291fc9396998df058674684"} Feb 19 06:14:17 crc kubenswrapper[5012]: I0219 06:14:17.703442 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:17 crc kubenswrapper[5012]: E0219 06:14:17.704274 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 
06:14:18 crc kubenswrapper[5012]: I0219 06:14:18.125617 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerStarted","Data":"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a"} Feb 19 06:14:19 crc kubenswrapper[5012]: I0219 06:14:19.142488 5012 generic.go:334] "Generic (PLEG): container finished" podID="ac664a0d-6329-4f30-b172-8251efffc897" containerID="ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a" exitCode=0 Feb 19 06:14:19 crc kubenswrapper[5012]: I0219 06:14:19.142745 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerDied","Data":"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a"} Feb 19 06:14:20 crc kubenswrapper[5012]: I0219 06:14:20.156273 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerStarted","Data":"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6"} Feb 19 06:14:20 crc kubenswrapper[5012]: I0219 06:14:20.181032 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bzsc4" podStartSLOduration=2.717482419 podStartE2EDuration="5.181004195s" podCreationTimestamp="2026-02-19 06:14:15 +0000 UTC" firstStartedPulling="2026-02-19 06:14:17.097744764 +0000 UTC m=+2953.131067373" lastFinishedPulling="2026-02-19 06:14:19.56126654 +0000 UTC m=+2955.594589149" observedRunningTime="2026-02-19 06:14:20.180631686 +0000 UTC m=+2956.213954265" watchObservedRunningTime="2026-02-19 06:14:20.181004195 +0000 UTC m=+2956.214326804" Feb 19 06:14:25 crc kubenswrapper[5012]: I0219 06:14:25.749370 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:25 crc kubenswrapper[5012]: I0219 06:14:25.750029 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:25 crc kubenswrapper[5012]: I0219 06:14:25.832703 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:26 crc kubenswrapper[5012]: I0219 06:14:26.306511 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:26 crc kubenswrapper[5012]: I0219 06:14:26.379677 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:28 crc kubenswrapper[5012]: I0219 06:14:28.250195 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bzsc4" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="registry-server" containerID="cri-o://39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" gracePeriod=2 Feb 19 06:14:28 crc kubenswrapper[5012]: I0219 06:14:28.900676 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.058339 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") pod \"ac664a0d-6329-4f30-b172-8251efffc897\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.058460 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") pod \"ac664a0d-6329-4f30-b172-8251efffc897\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.058493 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") pod \"ac664a0d-6329-4f30-b172-8251efffc897\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.060201 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities" (OuterVolumeSpecName: "utilities") pod "ac664a0d-6329-4f30-b172-8251efffc897" (UID: "ac664a0d-6329-4f30-b172-8251efffc897"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.067813 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh" (OuterVolumeSpecName: "kube-api-access-p2xdh") pod "ac664a0d-6329-4f30-b172-8251efffc897" (UID: "ac664a0d-6329-4f30-b172-8251efffc897"). InnerVolumeSpecName "kube-api-access-p2xdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.086221 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac664a0d-6329-4f30-b172-8251efffc897" (UID: "ac664a0d-6329-4f30-b172-8251efffc897"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.160409 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.160441 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.160451 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") on node \"crc\" DevicePath \"\"" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.268822 5012 generic.go:334] "Generic (PLEG): container finished" podID="ac664a0d-6329-4f30-b172-8251efffc897" containerID="39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" exitCode=0 Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.268871 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerDied","Data":"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6"} Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.268955 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerDied","Data":"d4c09887c9bcdda41a18bc0f97b29786c4c22d3d1291fc9396998df058674684"} Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.268994 5012 scope.go:117] "RemoveContainer" containerID="39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.271265 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.300679 5012 scope.go:117] "RemoveContainer" containerID="ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.339076 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.353485 5012 scope.go:117] "RemoveContainer" containerID="fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.354825 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.403735 5012 scope.go:117] "RemoveContainer" containerID="39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" Feb 19 06:14:29 crc kubenswrapper[5012]: E0219 06:14:29.404332 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6\": container with ID starting with 39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6 not found: ID does not exist" containerID="39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.404393 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6"} err="failed to get container status \"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6\": rpc error: code = NotFound desc = could not find container \"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6\": container with ID starting with 39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6 not found: ID does not exist" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.404429 5012 scope.go:117] "RemoveContainer" containerID="ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a" Feb 19 06:14:29 crc kubenswrapper[5012]: E0219 06:14:29.404870 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a\": container with ID starting with ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a not found: ID does not exist" containerID="ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.404924 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a"} err="failed to get container status \"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a\": rpc error: code = NotFound desc = could not find container \"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a\": container with ID starting with ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a not found: ID does not exist" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.404965 5012 scope.go:117] "RemoveContainer" containerID="fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a" Feb 19 06:14:29 crc kubenswrapper[5012]: E0219 
06:14:29.405285 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a\": container with ID starting with fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a not found: ID does not exist" containerID="fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.405357 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a"} err="failed to get container status \"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a\": rpc error: code = NotFound desc = could not find container \"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a\": container with ID starting with fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a not found: ID does not exist" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.703087 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:29 crc kubenswrapper[5012]: E0219 06:14:29.703973 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:14:30 crc kubenswrapper[5012]: I0219 06:14:30.723189 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac664a0d-6329-4f30-b172-8251efffc897" path="/var/lib/kubelet/pods/ac664a0d-6329-4f30-b172-8251efffc897/volumes" Feb 19 06:14:44 crc kubenswrapper[5012]: I0219 06:14:44.715120 
5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:44 crc kubenswrapper[5012]: E0219 06:14:44.716192 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:14:57 crc kubenswrapper[5012]: I0219 06:14:57.703581 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:57 crc kubenswrapper[5012]: E0219 06:14:57.704770 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.161517 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 06:15:00 crc kubenswrapper[5012]: E0219 06:15:00.162792 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="registry-server" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.162818 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="registry-server" Feb 19 06:15:00 crc kubenswrapper[5012]: E0219 06:15:00.162856 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="extract-content" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.162865 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="extract-content" Feb 19 06:15:00 crc kubenswrapper[5012]: E0219 06:15:00.162890 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="extract-utilities" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.162900 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="extract-utilities" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.163247 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="registry-server" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.164490 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.167359 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.167937 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.191963 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.277488 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") pod \"collect-profiles-29524695-rb8tt\" 
(UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.277728 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.278038 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.379846 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.379973 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.380034 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.380859 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.387760 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.412968 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.497537 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:01 crc kubenswrapper[5012]: I0219 06:15:01.023250 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 06:15:01 crc kubenswrapper[5012]: I0219 06:15:01.665796 5012 generic.go:334] "Generic (PLEG): container finished" podID="0d1557b7-91d6-4aac-8306-59d97142a76c" containerID="1a2cb819f1490aeaeb6e29cd5e196789ce8e9978f4d9987b6edfc7cea46ee158" exitCode=0 Feb 19 06:15:01 crc kubenswrapper[5012]: I0219 06:15:01.665915 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" event={"ID":"0d1557b7-91d6-4aac-8306-59d97142a76c","Type":"ContainerDied","Data":"1a2cb819f1490aeaeb6e29cd5e196789ce8e9978f4d9987b6edfc7cea46ee158"} Feb 19 06:15:01 crc kubenswrapper[5012]: I0219 06:15:01.666129 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" event={"ID":"0d1557b7-91d6-4aac-8306-59d97142a76c","Type":"ContainerStarted","Data":"20d1ff5b35a75cf787a735e30d46894c93d5313a7ebc1210e95fe7fe46b5d694"} Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.129916 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.243951 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") pod \"0d1557b7-91d6-4aac-8306-59d97142a76c\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.244084 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") pod \"0d1557b7-91d6-4aac-8306-59d97142a76c\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.244176 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") pod \"0d1557b7-91d6-4aac-8306-59d97142a76c\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.245143 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume" (OuterVolumeSpecName: "config-volume") pod "0d1557b7-91d6-4aac-8306-59d97142a76c" (UID: "0d1557b7-91d6-4aac-8306-59d97142a76c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.249968 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc" (OuterVolumeSpecName: "kube-api-access-4tnwc") pod "0d1557b7-91d6-4aac-8306-59d97142a76c" (UID: "0d1557b7-91d6-4aac-8306-59d97142a76c"). 
InnerVolumeSpecName "kube-api-access-4tnwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.251332 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0d1557b7-91d6-4aac-8306-59d97142a76c" (UID: "0d1557b7-91d6-4aac-8306-59d97142a76c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.346394 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.346423 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") on node \"crc\" DevicePath \"\"" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.346432 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.701795 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" event={"ID":"0d1557b7-91d6-4aac-8306-59d97142a76c","Type":"ContainerDied","Data":"20d1ff5b35a75cf787a735e30d46894c93d5313a7ebc1210e95fe7fe46b5d694"} Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.702152 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d1ff5b35a75cf787a735e30d46894c93d5313a7ebc1210e95fe7fe46b5d694" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.701833 5012 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:04 crc kubenswrapper[5012]: I0219 06:15:04.229920 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 06:15:04 crc kubenswrapper[5012]: I0219 06:15:04.238997 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 06:15:04 crc kubenswrapper[5012]: I0219 06:15:04.741287 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" path="/var/lib/kubelet/pods/ff63f713-7649-46d8-85cb-ef67dccf9fe6/volumes" Feb 19 06:15:10 crc kubenswrapper[5012]: I0219 06:15:10.704071 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:15:10 crc kubenswrapper[5012]: E0219 06:15:10.705480 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:15:25 crc kubenswrapper[5012]: I0219 06:15:25.703603 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:15:25 crc kubenswrapper[5012]: E0219 06:15:25.704604 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:15:37 crc kubenswrapper[5012]: I0219 06:15:37.702708 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:15:37 crc kubenswrapper[5012]: E0219 06:15:37.705071 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:15:38 crc kubenswrapper[5012]: I0219 06:15:38.416849 5012 scope.go:117] "RemoveContainer" containerID="c5d7329af46ea59d345e496a5c84f8c51fab010adcb4a91e0080f58a2ca4a9ec" Feb 19 06:15:51 crc kubenswrapper[5012]: I0219 06:15:51.703386 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:15:51 crc kubenswrapper[5012]: E0219 06:15:51.704427 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:04 crc kubenswrapper[5012]: I0219 06:16:04.717412 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:04 crc kubenswrapper[5012]: E0219 06:16:04.718973 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:19 crc kubenswrapper[5012]: I0219 06:16:19.702912 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:19 crc kubenswrapper[5012]: E0219 06:16:19.704146 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:32 crc kubenswrapper[5012]: I0219 06:16:32.704134 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:32 crc kubenswrapper[5012]: E0219 06:16:32.705212 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:44 crc kubenswrapper[5012]: I0219 06:16:44.711628 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:44 crc kubenswrapper[5012]: E0219 06:16:44.712732 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:56 crc kubenswrapper[5012]: I0219 06:16:56.702825 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:56 crc kubenswrapper[5012]: E0219 06:16:56.704000 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:17:07 crc kubenswrapper[5012]: I0219 06:17:07.703860 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:07 crc kubenswrapper[5012]: E0219 06:17:07.704546 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:17:20 crc kubenswrapper[5012]: I0219 06:17:20.703588 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:20 crc kubenswrapper[5012]: E0219 06:17:20.704956 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.374778 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:29 crc kubenswrapper[5012]: E0219 06:17:29.375817 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1557b7-91d6-4aac-8306-59d97142a76c" containerName="collect-profiles" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.375832 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1557b7-91d6-4aac-8306-59d97142a76c" containerName="collect-profiles" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.376061 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1557b7-91d6-4aac-8306-59d97142a76c" containerName="collect-profiles" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.377808 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.395612 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.560394 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.560778 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.560958 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.662595 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.662809 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.662883 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.664141 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.664658 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.699110 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.722285 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:30 crc kubenswrapper[5012]: I0219 06:17:30.242695 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:30 crc kubenswrapper[5012]: I0219 06:17:30.347122 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerStarted","Data":"dd7c0c797895ea4cd89489cdd87a27a03c805333e76ec09a18a354bd79977d27"} Feb 19 06:17:31 crc kubenswrapper[5012]: I0219 06:17:31.358065 5012 generic.go:334] "Generic (PLEG): container finished" podID="d97998a6-419f-4f4a-b313-942320f12a6b" containerID="cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2" exitCode=0 Feb 19 06:17:31 crc kubenswrapper[5012]: I0219 06:17:31.358114 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerDied","Data":"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2"} Feb 19 06:17:31 crc kubenswrapper[5012]: I0219 06:17:31.361937 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:17:31 crc kubenswrapper[5012]: I0219 06:17:31.703384 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:31 crc kubenswrapper[5012]: E0219 06:17:31.703752 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 
06:17:33 crc kubenswrapper[5012]: I0219 06:17:33.387011 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerStarted","Data":"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27"} Feb 19 06:17:34 crc kubenswrapper[5012]: I0219 06:17:34.400758 5012 generic.go:334] "Generic (PLEG): container finished" podID="d97998a6-419f-4f4a-b313-942320f12a6b" containerID="6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27" exitCode=0 Feb 19 06:17:34 crc kubenswrapper[5012]: I0219 06:17:34.400823 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerDied","Data":"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27"} Feb 19 06:17:35 crc kubenswrapper[5012]: I0219 06:17:35.412664 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerStarted","Data":"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef"} Feb 19 06:17:39 crc kubenswrapper[5012]: I0219 06:17:39.723462 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:39 crc kubenswrapper[5012]: I0219 06:17:39.724724 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:39 crc kubenswrapper[5012]: I0219 06:17:39.805065 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:39 crc kubenswrapper[5012]: I0219 06:17:39.844701 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgg6f" 
podStartSLOduration=7.415747155 podStartE2EDuration="10.84467882s" podCreationTimestamp="2026-02-19 06:17:29 +0000 UTC" firstStartedPulling="2026-02-19 06:17:31.361326576 +0000 UTC m=+3147.394649175" lastFinishedPulling="2026-02-19 06:17:34.790258231 +0000 UTC m=+3150.823580840" observedRunningTime="2026-02-19 06:17:35.446427219 +0000 UTC m=+3151.479749818" watchObservedRunningTime="2026-02-19 06:17:39.84467882 +0000 UTC m=+3155.878001399" Feb 19 06:17:40 crc kubenswrapper[5012]: I0219 06:17:40.521220 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:40 crc kubenswrapper[5012]: I0219 06:17:40.585490 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:42 crc kubenswrapper[5012]: I0219 06:17:42.486566 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cgg6f" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="registry-server" containerID="cri-o://c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" gracePeriod=2 Feb 19 06:17:42 crc kubenswrapper[5012]: I0219 06:17:42.703134 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:42 crc kubenswrapper[5012]: E0219 06:17:42.703939 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.050709 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.165460 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") pod \"d97998a6-419f-4f4a-b313-942320f12a6b\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.165746 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") pod \"d97998a6-419f-4f4a-b313-942320f12a6b\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.165783 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") pod \"d97998a6-419f-4f4a-b313-942320f12a6b\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.166530 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities" (OuterVolumeSpecName: "utilities") pod "d97998a6-419f-4f4a-b313-942320f12a6b" (UID: "d97998a6-419f-4f4a-b313-942320f12a6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.171243 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6" (OuterVolumeSpecName: "kube-api-access-kgbd6") pod "d97998a6-419f-4f4a-b313-942320f12a6b" (UID: "d97998a6-419f-4f4a-b313-942320f12a6b"). InnerVolumeSpecName "kube-api-access-kgbd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.221726 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d97998a6-419f-4f4a-b313-942320f12a6b" (UID: "d97998a6-419f-4f4a-b313-942320f12a6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.267955 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.267994 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.268008 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") on node \"crc\" DevicePath \"\"" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501318 5012 generic.go:334] "Generic (PLEG): container finished" podID="d97998a6-419f-4f4a-b313-942320f12a6b" containerID="c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" exitCode=0 Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501355 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerDied","Data":"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef"} Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501381 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerDied","Data":"dd7c0c797895ea4cd89489cdd87a27a03c805333e76ec09a18a354bd79977d27"} Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501399 5012 scope.go:117] "RemoveContainer" containerID="c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501429 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.540971 5012 scope.go:117] "RemoveContainer" containerID="6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.562488 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.575058 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.594810 5012 scope.go:117] "RemoveContainer" containerID="cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.631965 5012 scope.go:117] "RemoveContainer" containerID="c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" Feb 19 06:17:43 crc kubenswrapper[5012]: E0219 06:17:43.632378 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef\": container with ID starting with c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef not found: ID does not exist" containerID="c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 
06:17:43.632425 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef"} err="failed to get container status \"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef\": rpc error: code = NotFound desc = could not find container \"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef\": container with ID starting with c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef not found: ID does not exist" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.632459 5012 scope.go:117] "RemoveContainer" containerID="6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27" Feb 19 06:17:43 crc kubenswrapper[5012]: E0219 06:17:43.632719 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27\": container with ID starting with 6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27 not found: ID does not exist" containerID="6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.632761 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27"} err="failed to get container status \"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27\": rpc error: code = NotFound desc = could not find container \"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27\": container with ID starting with 6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27 not found: ID does not exist" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.632790 5012 scope.go:117] "RemoveContainer" containerID="cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2" Feb 19 06:17:43 crc 
kubenswrapper[5012]: E0219 06:17:43.632999 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2\": container with ID starting with cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2 not found: ID does not exist" containerID="cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.633029 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2"} err="failed to get container status \"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2\": rpc error: code = NotFound desc = could not find container \"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2\": container with ID starting with cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2 not found: ID does not exist" Feb 19 06:17:44 crc kubenswrapper[5012]: I0219 06:17:44.722034 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" path="/var/lib/kubelet/pods/d97998a6-419f-4f4a-b313-942320f12a6b/volumes" Feb 19 06:17:55 crc kubenswrapper[5012]: I0219 06:17:55.704006 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:55 crc kubenswrapper[5012]: E0219 06:17:55.705512 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:18:10 crc 
kubenswrapper[5012]: I0219 06:18:10.703377 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:18:10 crc kubenswrapper[5012]: E0219 06:18:10.704463 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:18:23 crc kubenswrapper[5012]: I0219 06:18:23.711629 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:18:23 crc kubenswrapper[5012]: I0219 06:18:23.975879 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536"} Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.701333 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:19:41 crc kubenswrapper[5012]: E0219 06:19:41.702293 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="registry-server" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.702323 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="registry-server" Feb 19 06:19:41 crc kubenswrapper[5012]: E0219 06:19:41.702357 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="extract-content" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 
06:19:41.702367 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="extract-content" Feb 19 06:19:41 crc kubenswrapper[5012]: E0219 06:19:41.702398 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="extract-utilities" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.702410 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="extract-utilities" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.702669 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="registry-server" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.704689 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.715009 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.777699 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.777818 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.777893 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.879612 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.879898 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.879956 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.880801 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.881014 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.904478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.084805 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.601758 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.925142 5012 generic.go:334] "Generic (PLEG): container finished" podID="55648b88-ee33-485a-9b58-46b433d1397d" containerID="b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b" exitCode=0 Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.925194 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerDied","Data":"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b"} Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.926346 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerStarted","Data":"5255dfd3a95ea83eee361c3aec9395da3fabe1c150f3519c2b4f0f0a37e4b8e0"} Feb 19 06:19:44 crc kubenswrapper[5012]: I0219 06:19:44.950935 
5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerStarted","Data":"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18"} Feb 19 06:19:52 crc kubenswrapper[5012]: I0219 06:19:52.050069 5012 generic.go:334] "Generic (PLEG): container finished" podID="55648b88-ee33-485a-9b58-46b433d1397d" containerID="8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18" exitCode=0 Feb 19 06:19:52 crc kubenswrapper[5012]: I0219 06:19:52.050202 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerDied","Data":"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18"} Feb 19 06:19:53 crc kubenswrapper[5012]: I0219 06:19:53.067939 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerStarted","Data":"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94"} Feb 19 06:20:02 crc kubenswrapper[5012]: I0219 06:20:02.085959 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:02 crc kubenswrapper[5012]: I0219 06:20:02.086821 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:03 crc kubenswrapper[5012]: I0219 06:20:03.141440 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fxxm" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" probeResult="failure" output=< Feb 19 06:20:03 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:20:03 crc kubenswrapper[5012]: > Feb 19 06:20:13 crc kubenswrapper[5012]: I0219 
06:20:13.139573 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fxxm" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" probeResult="failure" output=< Feb 19 06:20:13 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:20:13 crc kubenswrapper[5012]: > Feb 19 06:20:22 crc kubenswrapper[5012]: I0219 06:20:22.164229 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:22 crc kubenswrapper[5012]: I0219 06:20:22.193586 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fxxm" podStartSLOduration=31.527685488 podStartE2EDuration="41.193564978s" podCreationTimestamp="2026-02-19 06:19:41 +0000 UTC" firstStartedPulling="2026-02-19 06:19:42.926755179 +0000 UTC m=+3278.960077748" lastFinishedPulling="2026-02-19 06:19:52.592634659 +0000 UTC m=+3288.625957238" observedRunningTime="2026-02-19 06:19:53.099394838 +0000 UTC m=+3289.132717447" watchObservedRunningTime="2026-02-19 06:20:22.193564978 +0000 UTC m=+3318.226887547" Feb 19 06:20:22 crc kubenswrapper[5012]: I0219 06:20:22.233648 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:22 crc kubenswrapper[5012]: I0219 06:20:22.414408 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:20:23 crc kubenswrapper[5012]: I0219 06:20:23.371219 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4fxxm" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" containerID="cri-o://852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" gracePeriod=2 Feb 19 06:20:23 crc kubenswrapper[5012]: I0219 06:20:23.910202 5012 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.047130 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") pod \"55648b88-ee33-485a-9b58-46b433d1397d\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.047251 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") pod \"55648b88-ee33-485a-9b58-46b433d1397d\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.047567 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") pod \"55648b88-ee33-485a-9b58-46b433d1397d\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.048404 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities" (OuterVolumeSpecName: "utilities") pod "55648b88-ee33-485a-9b58-46b433d1397d" (UID: "55648b88-ee33-485a-9b58-46b433d1397d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.048678 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.059481 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8" (OuterVolumeSpecName: "kube-api-access-gkmw8") pod "55648b88-ee33-485a-9b58-46b433d1397d" (UID: "55648b88-ee33-485a-9b58-46b433d1397d"). InnerVolumeSpecName "kube-api-access-gkmw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.157476 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") on node \"crc\" DevicePath \"\"" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.225135 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55648b88-ee33-485a-9b58-46b433d1397d" (UID: "55648b88-ee33-485a-9b58-46b433d1397d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.261618 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383872 5012 generic.go:334] "Generic (PLEG): container finished" podID="55648b88-ee33-485a-9b58-46b433d1397d" containerID="852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" exitCode=0 Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383917 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerDied","Data":"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94"} Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383946 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerDied","Data":"5255dfd3a95ea83eee361c3aec9395da3fabe1c150f3519c2b4f0f0a37e4b8e0"} Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383961 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383970 5012 scope.go:117] "RemoveContainer" containerID="852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.429058 5012 scope.go:117] "RemoveContainer" containerID="8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.448018 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.455822 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.473791 5012 scope.go:117] "RemoveContainer" containerID="b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547031 5012 scope.go:117] "RemoveContainer" containerID="852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" Feb 19 06:20:24 crc kubenswrapper[5012]: E0219 06:20:24.547537 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94\": container with ID starting with 852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94 not found: ID does not exist" containerID="852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547598 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94"} err="failed to get container status \"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94\": rpc error: code = NotFound desc = could not find container 
\"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94\": container with ID starting with 852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94 not found: ID does not exist" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547629 5012 scope.go:117] "RemoveContainer" containerID="8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18" Feb 19 06:20:24 crc kubenswrapper[5012]: E0219 06:20:24.547922 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18\": container with ID starting with 8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18 not found: ID does not exist" containerID="8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547962 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18"} err="failed to get container status \"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18\": rpc error: code = NotFound desc = could not find container \"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18\": container with ID starting with 8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18 not found: ID does not exist" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547989 5012 scope.go:117] "RemoveContainer" containerID="b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b" Feb 19 06:20:24 crc kubenswrapper[5012]: E0219 06:20:24.548324 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b\": container with ID starting with b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b not found: ID does not exist" 
containerID="b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.548359 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b"} err="failed to get container status \"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b\": rpc error: code = NotFound desc = could not find container \"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b\": container with ID starting with b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b not found: ID does not exist" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.712174 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55648b88-ee33-485a-9b58-46b433d1397d" path="/var/lib/kubelet/pods/55648b88-ee33-485a-9b58-46b433d1397d/volumes" Feb 19 06:20:44 crc kubenswrapper[5012]: I0219 06:20:44.430505 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:20:44 crc kubenswrapper[5012]: I0219 06:20:44.431423 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:21:14 crc kubenswrapper[5012]: I0219 06:21:14.430561 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 19 06:21:14 crc kubenswrapper[5012]: I0219 06:21:14.431202 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.431144 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.432068 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.432164 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.433866 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.433976 5012 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536" gracePeriod=600 Feb 19 06:21:45 crc kubenswrapper[5012]: I0219 06:21:45.349474 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536" exitCode=0 Feb 19 06:21:45 crc kubenswrapper[5012]: I0219 06:21:45.349528 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536"} Feb 19 06:21:45 crc kubenswrapper[5012]: I0219 06:21:45.349908 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"} Feb 19 06:21:45 crc kubenswrapper[5012]: I0219 06:21:45.349937 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:23:44 crc kubenswrapper[5012]: I0219 06:23:44.430703 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:23:44 crc kubenswrapper[5012]: I0219 06:23:44.431400 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.451743 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:23:46 crc kubenswrapper[5012]: E0219 06:23:46.452796 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="extract-content" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.452818 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="extract-content" Feb 19 06:23:46 crc kubenswrapper[5012]: E0219 06:23:46.452869 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="extract-utilities" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.452884 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="extract-utilities" Feb 19 06:23:46 crc kubenswrapper[5012]: E0219 06:23:46.452914 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.452932 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.453344 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.456100 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.488600 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.638258 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.638673 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.638722 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.741559 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.742122 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.742387 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.742639 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.743000 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.766193 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.791440 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.321705 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.779633 5012 generic.go:334] "Generic (PLEG): container finished" podID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerID="d7e3c8bf7e3b50487cf7e0e61bc3377a127d4db21f72dced292a5d54263fa4ee" exitCode=0 Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.779722 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerDied","Data":"d7e3c8bf7e3b50487cf7e0e61bc3377a127d4db21f72dced292a5d54263fa4ee"} Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.780019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerStarted","Data":"fdd34cfd7a4af7091fd720c395760be22a35a636a0da0a4e96e39f9a61ed5980"} Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.781624 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:23:49 crc kubenswrapper[5012]: I0219 06:23:49.802254 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerStarted","Data":"6b96ddbcd9517469ebb56cd58f965d25623177e6b664a50e79bcb5defaa92b3f"} Feb 19 06:23:50 crc kubenswrapper[5012]: I0219 06:23:50.821840 5012 generic.go:334] "Generic (PLEG): container finished" podID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerID="6b96ddbcd9517469ebb56cd58f965d25623177e6b664a50e79bcb5defaa92b3f" exitCode=0 Feb 19 06:23:50 crc kubenswrapper[5012]: I0219 06:23:50.822207 5012 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerDied","Data":"6b96ddbcd9517469ebb56cd58f965d25623177e6b664a50e79bcb5defaa92b3f"} Feb 19 06:23:51 crc kubenswrapper[5012]: I0219 06:23:51.867895 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerStarted","Data":"8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406"} Feb 19 06:23:51 crc kubenswrapper[5012]: I0219 06:23:51.893469 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v5z2x" podStartSLOduration=2.211185039 podStartE2EDuration="5.893450927s" podCreationTimestamp="2026-02-19 06:23:46 +0000 UTC" firstStartedPulling="2026-02-19 06:23:47.781345633 +0000 UTC m=+3523.814668202" lastFinishedPulling="2026-02-19 06:23:51.463611521 +0000 UTC m=+3527.496934090" observedRunningTime="2026-02-19 06:23:51.893246782 +0000 UTC m=+3527.926569351" watchObservedRunningTime="2026-02-19 06:23:51.893450927 +0000 UTC m=+3527.926773496" Feb 19 06:23:56 crc kubenswrapper[5012]: I0219 06:23:56.791972 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:56 crc kubenswrapper[5012]: I0219 06:23:56.792500 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:56 crc kubenswrapper[5012]: I0219 06:23:56.844527 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:56 crc kubenswrapper[5012]: I0219 06:23:56.958874 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:59 crc kubenswrapper[5012]: I0219 
06:23:59.642641 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:23:59 crc kubenswrapper[5012]: I0219 06:23:59.643281 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v5z2x" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="registry-server" containerID="cri-o://8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406" gracePeriod=2 Feb 19 06:23:59 crc kubenswrapper[5012]: I0219 06:23:59.970918 5012 generic.go:334] "Generic (PLEG): container finished" podID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerID="8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406" exitCode=0 Feb 19 06:23:59 crc kubenswrapper[5012]: I0219 06:23:59.970968 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerDied","Data":"8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406"} Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.218885 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.267358 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") pod \"c47a2602-b592-46cf-8452-4c06ba540a3f\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.267647 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") pod \"c47a2602-b592-46cf-8452-4c06ba540a3f\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.267717 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") pod \"c47a2602-b592-46cf-8452-4c06ba540a3f\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.268940 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities" (OuterVolumeSpecName: "utilities") pod "c47a2602-b592-46cf-8452-4c06ba540a3f" (UID: "c47a2602-b592-46cf-8452-4c06ba540a3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.274988 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b" (OuterVolumeSpecName: "kube-api-access-9j82b") pod "c47a2602-b592-46cf-8452-4c06ba540a3f" (UID: "c47a2602-b592-46cf-8452-4c06ba540a3f"). InnerVolumeSpecName "kube-api-access-9j82b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.313580 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c47a2602-b592-46cf-8452-4c06ba540a3f" (UID: "c47a2602-b592-46cf-8452-4c06ba540a3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.370669 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") on node \"crc\" DevicePath \"\"" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.370715 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.370728 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.987135 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerDied","Data":"fdd34cfd7a4af7091fd720c395760be22a35a636a0da0a4e96e39f9a61ed5980"} Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.987575 5012 scope.go:117] "RemoveContainer" containerID="8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.987217 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:24:01 crc kubenswrapper[5012]: I0219 06:24:01.026827 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:24:01 crc kubenswrapper[5012]: I0219 06:24:01.034536 5012 scope.go:117] "RemoveContainer" containerID="6b96ddbcd9517469ebb56cd58f965d25623177e6b664a50e79bcb5defaa92b3f" Feb 19 06:24:01 crc kubenswrapper[5012]: I0219 06:24:01.043290 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:24:01 crc kubenswrapper[5012]: I0219 06:24:01.072700 5012 scope.go:117] "RemoveContainer" containerID="d7e3c8bf7e3b50487cf7e0e61bc3377a127d4db21f72dced292a5d54263fa4ee" Feb 19 06:24:02 crc kubenswrapper[5012]: I0219 06:24:02.717786 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" path="/var/lib/kubelet/pods/c47a2602-b592-46cf-8452-4c06ba540a3f/volumes" Feb 19 06:24:14 crc kubenswrapper[5012]: I0219 06:24:14.431941 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:24:14 crc kubenswrapper[5012]: I0219 06:24:14.432643 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.430761 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.431689 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.431770 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.433030 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.433136 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" gracePeriod=600 Feb 19 06:24:44 crc kubenswrapper[5012]: E0219 06:24:44.580044 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:24:45 crc kubenswrapper[5012]: I0219 06:24:45.487739 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" exitCode=0 Feb 19 06:24:45 crc kubenswrapper[5012]: I0219 06:24:45.487854 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"} Feb 19 06:24:45 crc kubenswrapper[5012]: I0219 06:24:45.488211 5012 scope.go:117] "RemoveContainer" containerID="259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536" Feb 19 06:24:45 crc kubenswrapper[5012]: I0219 06:24:45.489430 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:24:45 crc kubenswrapper[5012]: E0219 06:24:45.489974 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:25:00 crc kubenswrapper[5012]: I0219 06:25:00.704184 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:25:00 crc kubenswrapper[5012]: E0219 06:25:00.705423 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:25:11 crc kubenswrapper[5012]: I0219 06:25:11.703346 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:25:11 crc kubenswrapper[5012]: E0219 06:25:11.704250 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:25:25 crc kubenswrapper[5012]: I0219 06:25:25.704161 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:25:25 crc kubenswrapper[5012]: E0219 06:25:25.705207 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.899992 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"] Feb 19 06:25:26 crc kubenswrapper[5012]: E0219 06:25:26.900978 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="registry-server" Feb 19 06:25:26 crc 
kubenswrapper[5012]: I0219 06:25:26.901002 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="registry-server" Feb 19 06:25:26 crc kubenswrapper[5012]: E0219 06:25:26.901064 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="extract-content" Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.901090 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="extract-content" Feb 19 06:25:26 crc kubenswrapper[5012]: E0219 06:25:26.901143 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="extract-utilities" Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.901158 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="extract-utilities" Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.901640 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="registry-server" Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.905668 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.913241 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"] Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.950709 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.950759 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.950807 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052266 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052334 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052383 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052995 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052988 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.076036 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.246113 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.514317 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"] Feb 19 06:25:28 crc kubenswrapper[5012]: I0219 06:25:28.045297 5012 generic.go:334] "Generic (PLEG): container finished" podID="172af46a-ab5f-4245-8e79-7f204418aff2" containerID="fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81" exitCode=0 Feb 19 06:25:28 crc kubenswrapper[5012]: I0219 06:25:28.045419 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerDied","Data":"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81"} Feb 19 06:25:28 crc kubenswrapper[5012]: I0219 06:25:28.045470 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerStarted","Data":"e9b0a187167dabdfed5b0b44744a9761128c90057eb98772f7fc509641cbcffc"} Feb 19 06:25:29 crc kubenswrapper[5012]: I0219 06:25:29.058728 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerStarted","Data":"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"} Feb 19 06:25:30 crc kubenswrapper[5012]: I0219 06:25:30.073673 5012 generic.go:334] "Generic (PLEG): container finished" podID="172af46a-ab5f-4245-8e79-7f204418aff2" containerID="4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236" exitCode=0 Feb 19 06:25:30 crc kubenswrapper[5012]: I0219 06:25:30.073718 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" 
event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerDied","Data":"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"} Feb 19 06:25:31 crc kubenswrapper[5012]: I0219 06:25:31.087284 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerStarted","Data":"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"} Feb 19 06:25:31 crc kubenswrapper[5012]: I0219 06:25:31.111548 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vx6fn" podStartSLOduration=2.711265739 podStartE2EDuration="5.111531016s" podCreationTimestamp="2026-02-19 06:25:26 +0000 UTC" firstStartedPulling="2026-02-19 06:25:28.050644176 +0000 UTC m=+3624.083966785" lastFinishedPulling="2026-02-19 06:25:30.450909453 +0000 UTC m=+3626.484232062" observedRunningTime="2026-02-19 06:25:31.106045582 +0000 UTC m=+3627.139368191" watchObservedRunningTime="2026-02-19 06:25:31.111531016 +0000 UTC m=+3627.144853585" Feb 19 06:25:37 crc kubenswrapper[5012]: I0219 06:25:37.246510 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:37 crc kubenswrapper[5012]: I0219 06:25:37.247399 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:37 crc kubenswrapper[5012]: I0219 06:25:37.335841 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:38 crc kubenswrapper[5012]: I0219 06:25:38.287559 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:38 crc kubenswrapper[5012]: I0219 06:25:38.350860 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-vx6fn"] Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.228757 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vx6fn" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="registry-server" containerID="cri-o://54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0" gracePeriod=2 Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.703627 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:25:40 crc kubenswrapper[5012]: E0219 06:25:40.704499 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.762469 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.925960 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") pod \"172af46a-ab5f-4245-8e79-7f204418aff2\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.926228 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") pod \"172af46a-ab5f-4245-8e79-7f204418aff2\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.926319 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") pod \"172af46a-ab5f-4245-8e79-7f204418aff2\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.927491 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities" (OuterVolumeSpecName: "utilities") pod "172af46a-ab5f-4245-8e79-7f204418aff2" (UID: "172af46a-ab5f-4245-8e79-7f204418aff2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.941949 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn" (OuterVolumeSpecName: "kube-api-access-wljpn") pod "172af46a-ab5f-4245-8e79-7f204418aff2" (UID: "172af46a-ab5f-4245-8e79-7f204418aff2"). InnerVolumeSpecName "kube-api-access-wljpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.963647 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "172af46a-ab5f-4245-8e79-7f204418aff2" (UID: "172af46a-ab5f-4245-8e79-7f204418aff2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.029005 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.029046 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.029064 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") on node \"crc\" DevicePath \"\"" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240289 5012 generic.go:334] "Generic (PLEG): container finished" podID="172af46a-ab5f-4245-8e79-7f204418aff2" containerID="54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0" exitCode=0 Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240376 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerDied","Data":"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"} Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240416 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerDied","Data":"e9b0a187167dabdfed5b0b44744a9761128c90057eb98772f7fc509641cbcffc"} Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240426 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vx6fn" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240444 5012 scope.go:117] "RemoveContainer" containerID="54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.282708 5012 scope.go:117] "RemoveContainer" containerID="4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.305396 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"] Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.315858 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"] Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.326216 5012 scope.go:117] "RemoveContainer" containerID="fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.369578 5012 scope.go:117] "RemoveContainer" containerID="54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0" Feb 19 06:25:41 crc kubenswrapper[5012]: E0219 06:25:41.370668 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0\": container with ID starting with 54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0 not found: ID does not exist" containerID="54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.370742 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"} err="failed to get container status \"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0\": rpc error: code = NotFound desc = could not find container \"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0\": container with ID starting with 54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0 not found: ID does not exist" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.370783 5012 scope.go:117] "RemoveContainer" containerID="4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236" Feb 19 06:25:41 crc kubenswrapper[5012]: E0219 06:25:41.371182 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236\": container with ID starting with 4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236 not found: ID does not exist" containerID="4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.371214 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"} err="failed to get container status \"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236\": rpc error: code = NotFound desc = could not find container \"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236\": container with ID starting with 4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236 not found: ID does not exist" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.371242 5012 scope.go:117] "RemoveContainer" containerID="fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81" Feb 19 06:25:41 crc kubenswrapper[5012]: E0219 
06:25:41.371706 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81\": container with ID starting with fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81 not found: ID does not exist" containerID="fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81" Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.371755 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81"} err="failed to get container status \"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81\": rpc error: code = NotFound desc = could not find container \"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81\": container with ID starting with fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81 not found: ID does not exist" Feb 19 06:25:42 crc kubenswrapper[5012]: I0219 06:25:42.760811 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" path="/var/lib/kubelet/pods/172af46a-ab5f-4245-8e79-7f204418aff2/volumes" Feb 19 06:25:51 crc kubenswrapper[5012]: I0219 06:25:51.703952 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:25:51 crc kubenswrapper[5012]: E0219 06:25:51.705368 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:26:04 crc kubenswrapper[5012]: I0219 06:26:04.703862 
5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:26:04 crc kubenswrapper[5012]: E0219 06:26:04.705899 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:26:17 crc kubenswrapper[5012]: I0219 06:26:17.704871 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:26:17 crc kubenswrapper[5012]: E0219 06:26:17.705938 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:26:32 crc kubenswrapper[5012]: I0219 06:26:32.703886 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:26:32 crc kubenswrapper[5012]: E0219 06:26:32.705646 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:26:43 crc kubenswrapper[5012]: I0219 
06:26:43.703756 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:26:43 crc kubenswrapper[5012]: E0219 06:26:43.704847 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:26:54 crc kubenswrapper[5012]: I0219 06:26:54.715604 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:26:54 crc kubenswrapper[5012]: E0219 06:26:54.716281 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:27:05 crc kubenswrapper[5012]: I0219 06:27:05.703673 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:27:05 crc kubenswrapper[5012]: E0219 06:27:05.705155 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:27:18 crc 
kubenswrapper[5012]: I0219 06:27:18.703613 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:27:18 crc kubenswrapper[5012]: E0219 06:27:18.704269 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:27:33 crc kubenswrapper[5012]: I0219 06:27:33.703268 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:27:33 crc kubenswrapper[5012]: E0219 06:27:33.704556 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:27:45 crc kubenswrapper[5012]: I0219 06:27:45.703624 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:27:45 crc kubenswrapper[5012]: E0219 06:27:45.704717 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 
19 06:28:00 crc kubenswrapper[5012]: I0219 06:28:00.709688 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:28:00 crc kubenswrapper[5012]: E0219 06:28:00.710855 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:28:03 crc kubenswrapper[5012]: I0219 06:28:03.726287 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerName="galera" probeResult="failure" output="command timed out" Feb 19 06:28:03 crc kubenswrapper[5012]: I0219 06:28:03.726329 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerName="galera" probeResult="failure" output="command timed out" Feb 19 06:28:13 crc kubenswrapper[5012]: I0219 06:28:13.703796 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:28:13 crc kubenswrapper[5012]: E0219 06:28:13.708371 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:28:25 crc kubenswrapper[5012]: I0219 06:28:25.703212 5012 scope.go:117] "RemoveContainer" 
containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:28:25 crc kubenswrapper[5012]: E0219 06:28:25.704376 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:28:36 crc kubenswrapper[5012]: I0219 06:28:36.703404 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:28:36 crc kubenswrapper[5012]: E0219 06:28:36.704597 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.488131 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxf2x"] Feb 19 06:28:47 crc kubenswrapper[5012]: E0219 06:28:47.489239 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="extract-utilities" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.489255 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="extract-utilities" Feb 19 06:28:47 crc kubenswrapper[5012]: E0219 06:28:47.489286 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" 
containerName="registry-server" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.489294 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="registry-server" Feb 19 06:28:47 crc kubenswrapper[5012]: E0219 06:28:47.489579 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="extract-content" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.489590 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="extract-content" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.489859 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="registry-server" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.491646 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.508567 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxf2x"] Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.584705 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6d6\" (UniqueName: \"kubernetes.io/projected/4d86775d-0772-4adf-9ed9-c7b3016d97e7-kube-api-access-xg6d6\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.584911 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-utilities\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " 
pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.584981 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-catalog-content\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.687552 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-utilities\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.687659 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-catalog-content\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.687715 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6d6\" (UniqueName: \"kubernetes.io/projected/4d86775d-0772-4adf-9ed9-c7b3016d97e7-kube-api-access-xg6d6\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.688158 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-utilities\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " 
pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.688223 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-catalog-content\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.715980 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6d6\" (UniqueName: \"kubernetes.io/projected/4d86775d-0772-4adf-9ed9-c7b3016d97e7-kube-api-access-xg6d6\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.819232 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:48 crc kubenswrapper[5012]: I0219 06:28:48.358764 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxf2x"] Feb 19 06:28:49 crc kubenswrapper[5012]: I0219 06:28:49.600889 5012 generic.go:334] "Generic (PLEG): container finished" podID="4d86775d-0772-4adf-9ed9-c7b3016d97e7" containerID="2e49cc5d1f19e547f3c40e84406ad299664d5c74bcbb965ace63f18eab5b6c2e" exitCode=0 Feb 19 06:28:49 crc kubenswrapper[5012]: I0219 06:28:49.601685 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxf2x" event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerDied","Data":"2e49cc5d1f19e547f3c40e84406ad299664d5c74bcbb965ace63f18eab5b6c2e"} Feb 19 06:28:49 crc kubenswrapper[5012]: I0219 06:28:49.601732 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxf2x" 
event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerStarted","Data":"7ac581084802c7d8322972f7f930fc88583a08840a6726fe7aee7a04889bb890"} Feb 19 06:28:49 crc kubenswrapper[5012]: I0219 06:28:49.605014 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:28:50 crc kubenswrapper[5012]: I0219 06:28:50.703556 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:28:50 crc kubenswrapper[5012]: E0219 06:28:50.705101 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:28:54 crc kubenswrapper[5012]: I0219 06:28:54.658083 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxf2x" event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerStarted","Data":"26f630cfb6ba7ceb8e1ec51f3107849c4ee537f733203b7bcb98c45bf30728fa"} Feb 19 06:28:55 crc kubenswrapper[5012]: I0219 06:28:55.672895 5012 generic.go:334] "Generic (PLEG): container finished" podID="4d86775d-0772-4adf-9ed9-c7b3016d97e7" containerID="26f630cfb6ba7ceb8e1ec51f3107849c4ee537f733203b7bcb98c45bf30728fa" exitCode=0 Feb 19 06:28:55 crc kubenswrapper[5012]: I0219 06:28:55.673159 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxf2x" event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerDied","Data":"26f630cfb6ba7ceb8e1ec51f3107849c4ee537f733203b7bcb98c45bf30728fa"} Feb 19 06:28:56 crc kubenswrapper[5012]: I0219 06:28:56.690085 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-pxf2x" event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerStarted","Data":"83897394254a09c8d3f5ccb409151529e06a1388aec8850dc52a000891f989d3"} Feb 19 06:28:56 crc kubenswrapper[5012]: I0219 06:28:56.718765 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxf2x" podStartSLOduration=3.238273465 podStartE2EDuration="9.718748689s" podCreationTimestamp="2026-02-19 06:28:47 +0000 UTC" firstStartedPulling="2026-02-19 06:28:49.604630471 +0000 UTC m=+3825.637953070" lastFinishedPulling="2026-02-19 06:28:56.085105715 +0000 UTC m=+3832.118428294" observedRunningTime="2026-02-19 06:28:56.718441441 +0000 UTC m=+3832.751764050" watchObservedRunningTime="2026-02-19 06:28:56.718748689 +0000 UTC m=+3832.752071268" Feb 19 06:28:57 crc kubenswrapper[5012]: I0219 06:28:57.820126 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:57 crc kubenswrapper[5012]: I0219 06:28:57.820469 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:58 crc kubenswrapper[5012]: I0219 06:28:58.909537 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pxf2x" podUID="4d86775d-0772-4adf-9ed9-c7b3016d97e7" containerName="registry-server" probeResult="failure" output=< Feb 19 06:28:58 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:28:58 crc kubenswrapper[5012]: > Feb 19 06:29:05 crc kubenswrapper[5012]: I0219 06:29:05.703384 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:29:05 crc kubenswrapper[5012]: E0219 06:29:05.704228 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:29:07 crc kubenswrapper[5012]: I0219 06:29:07.889888 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:29:07 crc kubenswrapper[5012]: I0219 06:29:07.951331 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.062107 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxf2x"] Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.154955 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.155327 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bj5sc" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="registry-server" containerID="cri-o://ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" gracePeriod=2 Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.665354 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.794356 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") pod \"b03ab861-19bb-4215-9b19-990a14b35367\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.794492 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") pod \"b03ab861-19bb-4215-9b19-990a14b35367\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.794564 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") pod \"b03ab861-19bb-4215-9b19-990a14b35367\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.795163 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities" (OuterVolumeSpecName: "utilities") pod "b03ab861-19bb-4215-9b19-990a14b35367" (UID: "b03ab861-19bb-4215-9b19-990a14b35367"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.808587 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk" (OuterVolumeSpecName: "kube-api-access-lqbpk") pod "b03ab861-19bb-4215-9b19-990a14b35367" (UID: "b03ab861-19bb-4215-9b19-990a14b35367"). InnerVolumeSpecName "kube-api-access-lqbpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829081 5012 generic.go:334] "Generic (PLEG): container finished" podID="b03ab861-19bb-4215-9b19-990a14b35367" containerID="ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" exitCode=0 Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829149 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829148 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerDied","Data":"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80"} Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829450 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerDied","Data":"b3f8ca73c66c4fd97d0f19be0a24c8b8a95a41c1d3401dfb594c1ddc1a916e29"} Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829472 5012 scope.go:117] "RemoveContainer" containerID="ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.868248 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b03ab861-19bb-4215-9b19-990a14b35367" (UID: "b03ab861-19bb-4215-9b19-990a14b35367"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.871651 5012 scope.go:117] "RemoveContainer" containerID="8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.888949 5012 scope.go:117] "RemoveContainer" containerID="8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.896650 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.896669 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.896695 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") on node \"crc\" DevicePath \"\"" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.934825 5012 scope.go:117] "RemoveContainer" containerID="ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" Feb 19 06:29:08 crc kubenswrapper[5012]: E0219 06:29:08.935215 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80\": container with ID starting with ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80 not found: ID does not exist" containerID="ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935247 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80"} err="failed to get container status \"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80\": rpc error: code = NotFound desc = could not find container \"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80\": container with ID starting with ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80 not found: ID does not exist" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935268 5012 scope.go:117] "RemoveContainer" containerID="8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f" Feb 19 06:29:08 crc kubenswrapper[5012]: E0219 06:29:08.935472 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f\": container with ID starting with 8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f not found: ID does not exist" containerID="8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935496 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f"} err="failed to get container status \"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f\": rpc error: code = NotFound desc = could not find container \"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f\": container with ID starting with 8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f not found: ID does not exist" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935508 5012 scope.go:117] "RemoveContainer" containerID="8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b" Feb 19 06:29:08 crc kubenswrapper[5012]: E0219 06:29:08.935688 5012 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b\": container with ID starting with 8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b not found: ID does not exist" containerID="8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935708 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b"} err="failed to get container status \"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b\": rpc error: code = NotFound desc = could not find container \"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b\": container with ID starting with 8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b not found: ID does not exist" Feb 19 06:29:09 crc kubenswrapper[5012]: I0219 06:29:09.185259 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 06:29:09 crc kubenswrapper[5012]: I0219 06:29:09.196036 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 06:29:10 crc kubenswrapper[5012]: I0219 06:29:10.723996 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03ab861-19bb-4215-9b19-990a14b35367" path="/var/lib/kubelet/pods/b03ab861-19bb-4215-9b19-990a14b35367/volumes" Feb 19 06:29:19 crc kubenswrapper[5012]: I0219 06:29:19.703974 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:29:19 crc kubenswrapper[5012]: E0219 06:29:19.705146 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:29:32 crc kubenswrapper[5012]: I0219 06:29:32.703183 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:29:32 crc kubenswrapper[5012]: E0219 06:29:32.704066 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:29:44 crc kubenswrapper[5012]: I0219 06:29:44.716371 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:29:45 crc kubenswrapper[5012]: I0219 06:29:45.262095 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7"} Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.206388 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674"] Feb 19 06:30:00 crc kubenswrapper[5012]: E0219 06:30:00.207731 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="registry-server" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.207763 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="registry-server" 
Feb 19 06:30:00 crc kubenswrapper[5012]: E0219 06:30:00.207777 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="extract-utilities" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.207788 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="extract-utilities" Feb 19 06:30:00 crc kubenswrapper[5012]: E0219 06:30:00.207812 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="extract-content" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.207827 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="extract-content" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.208118 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="registry-server" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.209095 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.214190 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.221919 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674"] Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.222045 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.388556 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.389536 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.389886 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.491525 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.491605 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.491686 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.493186 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.504862 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.523926 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.531051 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:01 crc kubenswrapper[5012]: I0219 06:30:01.096117 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674"] Feb 19 06:30:01 crc kubenswrapper[5012]: W0219 06:30:01.099334 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4472c9_3299_45cf_95d4_af341606fb58.slice/crio-5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2 WatchSource:0}: Error finding container 5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2: Status 404 returned error can't find the container with id 5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2 Feb 19 06:30:01 crc kubenswrapper[5012]: I0219 06:30:01.422382 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" event={"ID":"4f4472c9-3299-45cf-95d4-af341606fb58","Type":"ContainerStarted","Data":"c006cdccaac79fb8cfe4dc746d53b358029e4ebb06f6b1804024382f3aa49800"} Feb 19 06:30:01 crc 
kubenswrapper[5012]: I0219 06:30:01.422860 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" event={"ID":"4f4472c9-3299-45cf-95d4-af341606fb58","Type":"ContainerStarted","Data":"5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2"} Feb 19 06:30:01 crc kubenswrapper[5012]: I0219 06:30:01.447561 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" podStartSLOduration=1.447535971 podStartE2EDuration="1.447535971s" podCreationTimestamp="2026-02-19 06:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 06:30:01.440302744 +0000 UTC m=+3897.473625313" watchObservedRunningTime="2026-02-19 06:30:01.447535971 +0000 UTC m=+3897.480858550" Feb 19 06:30:02 crc kubenswrapper[5012]: I0219 06:30:02.445206 5012 generic.go:334] "Generic (PLEG): container finished" podID="4f4472c9-3299-45cf-95d4-af341606fb58" containerID="c006cdccaac79fb8cfe4dc746d53b358029e4ebb06f6b1804024382f3aa49800" exitCode=0 Feb 19 06:30:02 crc kubenswrapper[5012]: I0219 06:30:02.445373 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" event={"ID":"4f4472c9-3299-45cf-95d4-af341606fb58","Type":"ContainerDied","Data":"c006cdccaac79fb8cfe4dc746d53b358029e4ebb06f6b1804024382f3aa49800"} Feb 19 06:30:03 crc kubenswrapper[5012]: I0219 06:30:03.943558 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.093170 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") pod \"4f4472c9-3299-45cf-95d4-af341606fb58\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.093242 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") pod \"4f4472c9-3299-45cf-95d4-af341606fb58\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.093275 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") pod \"4f4472c9-3299-45cf-95d4-af341606fb58\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.094670 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume" (OuterVolumeSpecName: "config-volume") pod "4f4472c9-3299-45cf-95d4-af341606fb58" (UID: "4f4472c9-3299-45cf-95d4-af341606fb58"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.100040 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4f4472c9-3299-45cf-95d4-af341606fb58" (UID: "4f4472c9-3299-45cf-95d4-af341606fb58"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.102519 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d" (OuterVolumeSpecName: "kube-api-access-mch7d") pod "4f4472c9-3299-45cf-95d4-af341606fb58" (UID: "4f4472c9-3299-45cf-95d4-af341606fb58"). InnerVolumeSpecName "kube-api-access-mch7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.196670 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.196711 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.196723 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") on node \"crc\" DevicePath \"\"" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.475289 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" event={"ID":"4f4472c9-3299-45cf-95d4-af341606fb58","Type":"ContainerDied","Data":"5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2"} Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.475341 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.475456 5012 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.550686 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.559956 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.729947 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46070367-1765-4a70-b997-58b87ee1fbf1" path="/var/lib/kubelet/pods/46070367-1765-4a70-b997-58b87ee1fbf1/volumes" Feb 19 06:30:38 crc kubenswrapper[5012]: I0219 06:30:38.926638 5012 scope.go:117] "RemoveContainer" containerID="ef8d233d5ce4a4673c65e084ba6deb20a57df07604ba44e351882efa60733381" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.316081 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:30:53 crc kubenswrapper[5012]: E0219 06:30:53.317392 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4472c9-3299-45cf-95d4-af341606fb58" containerName="collect-profiles" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.317408 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4472c9-3299-45cf-95d4-af341606fb58" containerName="collect-profiles" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.317741 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4472c9-3299-45cf-95d4-af341606fb58" containerName="collect-profiles" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.319717 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.333489 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.448731 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.448918 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.448966 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.550800 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.550947 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.550981 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.551542 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.551636 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.572211 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.651576 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:54 crc kubenswrapper[5012]: I0219 06:30:54.223213 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:30:54 crc kubenswrapper[5012]: W0219 06:30:54.237809 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3768d99_6ea2_494b_bce6_a469804e6f6f.slice/crio-a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3 WatchSource:0}: Error finding container a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3: Status 404 returned error can't find the container with id a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3 Feb 19 06:30:55 crc kubenswrapper[5012]: I0219 06:30:55.070089 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerID="1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0" exitCode=0 Feb 19 06:30:55 crc kubenswrapper[5012]: I0219 06:30:55.070469 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerDied","Data":"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0"} Feb 19 06:30:55 crc kubenswrapper[5012]: I0219 06:30:55.070516 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerStarted","Data":"a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3"} Feb 19 06:30:57 crc kubenswrapper[5012]: I0219 06:30:57.100109 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" 
event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerStarted","Data":"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33"} Feb 19 06:31:01 crc kubenswrapper[5012]: I0219 06:31:01.141905 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerID="9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33" exitCode=0 Feb 19 06:31:01 crc kubenswrapper[5012]: I0219 06:31:01.142005 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerDied","Data":"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33"} Feb 19 06:31:02 crc kubenswrapper[5012]: I0219 06:31:02.153181 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerStarted","Data":"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91"} Feb 19 06:31:02 crc kubenswrapper[5012]: I0219 06:31:02.171838 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lllnk" podStartSLOduration=2.529620654 podStartE2EDuration="9.171811505s" podCreationTimestamp="2026-02-19 06:30:53 +0000 UTC" firstStartedPulling="2026-02-19 06:30:55.121189256 +0000 UTC m=+3951.154511825" lastFinishedPulling="2026-02-19 06:31:01.763380067 +0000 UTC m=+3957.796702676" observedRunningTime="2026-02-19 06:31:02.169237112 +0000 UTC m=+3958.202559751" watchObservedRunningTime="2026-02-19 06:31:02.171811505 +0000 UTC m=+3958.205134114" Feb 19 06:31:03 crc kubenswrapper[5012]: I0219 06:31:03.653275 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:03 crc kubenswrapper[5012]: I0219 06:31:03.655024 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:04 crc kubenswrapper[5012]: I0219 06:31:04.729792 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lllnk" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" probeResult="failure" output=< Feb 19 06:31:04 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:31:04 crc kubenswrapper[5012]: > Feb 19 06:31:13 crc kubenswrapper[5012]: I0219 06:31:13.735474 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:13 crc kubenswrapper[5012]: I0219 06:31:13.807073 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:13 crc kubenswrapper[5012]: I0219 06:31:13.976391 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.296471 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lllnk" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" containerID="cri-o://e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" gracePeriod=2 Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.815400 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.911862 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") pod \"f3768d99-6ea2-494b-bce6-a469804e6f6f\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.912060 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") pod \"f3768d99-6ea2-494b-bce6-a469804e6f6f\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.912221 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") pod \"f3768d99-6ea2-494b-bce6-a469804e6f6f\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.913801 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities" (OuterVolumeSpecName: "utilities") pod "f3768d99-6ea2-494b-bce6-a469804e6f6f" (UID: "f3768d99-6ea2-494b-bce6-a469804e6f6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.919333 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6" (OuterVolumeSpecName: "kube-api-access-dzvz6") pod "f3768d99-6ea2-494b-bce6-a469804e6f6f" (UID: "f3768d99-6ea2-494b-bce6-a469804e6f6f"). InnerVolumeSpecName "kube-api-access-dzvz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.014513 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.014542 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") on node \"crc\" DevicePath \"\"" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.067807 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3768d99-6ea2-494b-bce6-a469804e6f6f" (UID: "f3768d99-6ea2-494b-bce6-a469804e6f6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.116595 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309404 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerID="e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" exitCode=0 Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309456 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerDied","Data":"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91"} Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309488 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerDied","Data":"a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3"} Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309526 5012 scope.go:117] "RemoveContainer" containerID="e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309527 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.400629 5012 scope.go:117] "RemoveContainer" containerID="9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.403919 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.413207 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.432228 5012 scope.go:117] "RemoveContainer" containerID="1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.529693 5012 scope.go:117] "RemoveContainer" containerID="e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" Feb 19 06:31:16 crc kubenswrapper[5012]: E0219 06:31:16.530134 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91\": container with ID starting with e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91 not found: ID does not exist" containerID="e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.530180 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91"} err="failed to get container status \"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91\": rpc error: code = NotFound desc = could not find container \"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91\": container with ID starting with e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91 not found: ID does not exist" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.530202 5012 scope.go:117] "RemoveContainer" containerID="9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33" Feb 19 06:31:16 crc kubenswrapper[5012]: E0219 06:31:16.530749 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33\": container with ID starting with 9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33 not found: ID does not exist" containerID="9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.530809 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33"} err="failed to get container status \"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33\": rpc error: code = NotFound desc = could not find container \"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33\": container with ID starting with 9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33 not found: ID does not exist" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.530836 5012 scope.go:117] "RemoveContainer" containerID="1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0" Feb 19 06:31:16 crc kubenswrapper[5012]: E0219 
06:31:16.531164 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0\": container with ID starting with 1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0 not found: ID does not exist" containerID="1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.531206 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0"} err="failed to get container status \"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0\": rpc error: code = NotFound desc = could not find container \"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0\": container with ID starting with 1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0 not found: ID does not exist" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.713551 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" path="/var/lib/kubelet/pods/f3768d99-6ea2-494b-bce6-a469804e6f6f/volumes" Feb 19 06:31:44 crc kubenswrapper[5012]: I0219 06:31:44.431003 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:31:44 crc kubenswrapper[5012]: I0219 06:31:44.431694 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 06:32:14 crc kubenswrapper[5012]: I0219 06:32:14.430902 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:32:14 crc kubenswrapper[5012]: I0219 06:32:14.431706 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.431241 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.432001 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.432073 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.433253 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7"} 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.433383 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7" gracePeriod=600 Feb 19 06:32:45 crc kubenswrapper[5012]: I0219 06:32:45.351087 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7" exitCode=0 Feb 19 06:32:45 crc kubenswrapper[5012]: I0219 06:32:45.351186 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7"} Feb 19 06:32:45 crc kubenswrapper[5012]: I0219 06:32:45.351659 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0"} Feb 19 06:32:45 crc kubenswrapper[5012]: I0219 06:32:45.351688 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:34:44 crc kubenswrapper[5012]: I0219 06:34:44.431067 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 06:34:44 crc kubenswrapper[5012]: I0219 06:34:44.432042 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.333886 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:34:52 crc kubenswrapper[5012]: E0219 06:34:52.335502 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.335528 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" Feb 19 06:34:52 crc kubenswrapper[5012]: E0219 06:34:52.335555 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="extract-content" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.335569 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="extract-content" Feb 19 06:34:52 crc kubenswrapper[5012]: E0219 06:34:52.335622 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="extract-utilities" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.335640 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="extract-utilities" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.336021 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" Feb 19 06:34:52 crc 
kubenswrapper[5012]: I0219 06:34:52.338730 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.362652 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.420319 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.420421 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.420608 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.522873 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc 
kubenswrapper[5012]: I0219 06:34:52.522947 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.523075 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.523577 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.523658 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.557227 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 
06:34:52.696371 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:53 crc kubenswrapper[5012]: I0219 06:34:53.220630 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:34:54 crc kubenswrapper[5012]: I0219 06:34:54.005106 5012 generic.go:334] "Generic (PLEG): container finished" podID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerID="318b1dbceebe4140b4f2722f73afebdf8799017f86633f49bf31c3697900743c" exitCode=0 Feb 19 06:34:54 crc kubenswrapper[5012]: I0219 06:34:54.005214 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerDied","Data":"318b1dbceebe4140b4f2722f73afebdf8799017f86633f49bf31c3697900743c"} Feb 19 06:34:54 crc kubenswrapper[5012]: I0219 06:34:54.005641 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerStarted","Data":"d7a534106ffeb4a71bd58d47b75e229b45f11ce10463c0bc23fa5f8d7f17d1eb"} Feb 19 06:34:54 crc kubenswrapper[5012]: I0219 06:34:54.009535 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:34:56 crc kubenswrapper[5012]: I0219 06:34:56.036467 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerStarted","Data":"f94dfe8658183190b4b7f97cec9a4d23cae32448a7a614b815c534822bd6b108"} Feb 19 06:34:57 crc kubenswrapper[5012]: I0219 06:34:57.052757 5012 generic.go:334] "Generic (PLEG): container finished" podID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerID="f94dfe8658183190b4b7f97cec9a4d23cae32448a7a614b815c534822bd6b108" exitCode=0 Feb 19 06:34:57 crc 
kubenswrapper[5012]: I0219 06:34:57.052864 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerDied","Data":"f94dfe8658183190b4b7f97cec9a4d23cae32448a7a614b815c534822bd6b108"} Feb 19 06:34:58 crc kubenswrapper[5012]: I0219 06:34:58.068616 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerStarted","Data":"67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7"} Feb 19 06:34:58 crc kubenswrapper[5012]: I0219 06:34:58.098112 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x5n5b" podStartSLOduration=2.643510505 podStartE2EDuration="6.098088373s" podCreationTimestamp="2026-02-19 06:34:52 +0000 UTC" firstStartedPulling="2026-02-19 06:34:54.008109359 +0000 UTC m=+4190.041431968" lastFinishedPulling="2026-02-19 06:34:57.462687227 +0000 UTC m=+4193.496009836" observedRunningTime="2026-02-19 06:34:58.094765762 +0000 UTC m=+4194.128088351" watchObservedRunningTime="2026-02-19 06:34:58.098088373 +0000 UTC m=+4194.131410942" Feb 19 06:35:02 crc kubenswrapper[5012]: I0219 06:35:02.697486 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:02 crc kubenswrapper[5012]: I0219 06:35:02.698163 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:03 crc kubenswrapper[5012]: I0219 06:35:03.400057 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:03 crc kubenswrapper[5012]: I0219 06:35:03.495476 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:03 crc kubenswrapper[5012]: I0219 06:35:03.657635 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:35:05 crc kubenswrapper[5012]: I0219 06:35:05.152895 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x5n5b" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="registry-server" containerID="cri-o://67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7" gracePeriod=2 Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.191624 5012 generic.go:334] "Generic (PLEG): container finished" podID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerID="67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7" exitCode=0 Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.191720 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerDied","Data":"67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7"} Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.696968 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.837851 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") pod \"dda56b59-9803-4c72-8018-cf68bb4c543c\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.837964 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") pod \"dda56b59-9803-4c72-8018-cf68bb4c543c\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.838059 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") pod \"dda56b59-9803-4c72-8018-cf68bb4c543c\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.842107 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities" (OuterVolumeSpecName: "utilities") pod "dda56b59-9803-4c72-8018-cf68bb4c543c" (UID: "dda56b59-9803-4c72-8018-cf68bb4c543c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.848139 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb" (OuterVolumeSpecName: "kube-api-access-2g8mb") pod "dda56b59-9803-4c72-8018-cf68bb4c543c" (UID: "dda56b59-9803-4c72-8018-cf68bb4c543c"). InnerVolumeSpecName "kube-api-access-2g8mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.916419 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dda56b59-9803-4c72-8018-cf68bb4c543c" (UID: "dda56b59-9803-4c72-8018-cf68bb4c543c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.940852 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") on node \"crc\" DevicePath \"\"" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.940883 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.940899 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.209438 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerDied","Data":"d7a534106ffeb4a71bd58d47b75e229b45f11ce10463c0bc23fa5f8d7f17d1eb"} Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.209527 5012 scope.go:117] "RemoveContainer" containerID="67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.209566 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.247842 5012 scope.go:117] "RemoveContainer" containerID="f94dfe8658183190b4b7f97cec9a4d23cae32448a7a614b815c534822bd6b108" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.277490 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.296043 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.302207 5012 scope.go:117] "RemoveContainer" containerID="318b1dbceebe4140b4f2722f73afebdf8799017f86633f49bf31c3697900743c" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.731003 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" path="/var/lib/kubelet/pods/dda56b59-9803-4c72-8018-cf68bb4c543c/volumes" Feb 19 06:35:14 crc kubenswrapper[5012]: I0219 06:35:14.431050 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:35:14 crc kubenswrapper[5012]: I0219 06:35:14.431743 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.431286 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.432044 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.432120 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.433350 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.433456 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" gracePeriod=600 Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.603645 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" exitCode=0 Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.603854 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0"} Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.604108 5012 scope.go:117] "RemoveContainer" containerID="a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7" Feb 19 06:35:45 crc kubenswrapper[5012]: E0219 06:35:45.134575 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:35:45 crc kubenswrapper[5012]: I0219 06:35:45.615233 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:35:45 crc kubenswrapper[5012]: E0219 06:35:45.615918 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:00 crc kubenswrapper[5012]: I0219 06:36:00.704509 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:00 crc kubenswrapper[5012]: E0219 06:36:00.705607 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:11 crc kubenswrapper[5012]: I0219 06:36:11.703236 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:11 crc kubenswrapper[5012]: E0219 06:36:11.704373 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:22 crc kubenswrapper[5012]: I0219 06:36:22.704802 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:22 crc kubenswrapper[5012]: E0219 06:36:22.705813 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:34 crc kubenswrapper[5012]: I0219 06:36:34.711445 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:34 crc kubenswrapper[5012]: E0219 06:36:34.712427 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:49 crc kubenswrapper[5012]: I0219 06:36:49.703771 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:49 crc kubenswrapper[5012]: E0219 06:36:49.704997 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:00 crc kubenswrapper[5012]: I0219 06:37:00.704185 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:00 crc kubenswrapper[5012]: E0219 06:37:00.705425 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:12 crc kubenswrapper[5012]: I0219 06:37:12.703876 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:12 crc kubenswrapper[5012]: E0219 06:37:12.704754 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:27 crc kubenswrapper[5012]: I0219 06:37:27.703621 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:27 crc kubenswrapper[5012]: E0219 06:37:27.704560 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:39 crc kubenswrapper[5012]: I0219 06:37:39.705107 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:39 crc kubenswrapper[5012]: E0219 06:37:39.706164 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:52 crc kubenswrapper[5012]: I0219 06:37:52.704356 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:52 crc kubenswrapper[5012]: E0219 06:37:52.705485 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:38:07 crc kubenswrapper[5012]: I0219 06:38:07.702834 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:38:07 crc kubenswrapper[5012]: E0219 06:38:07.703512 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:38:21 crc kubenswrapper[5012]: I0219 06:38:21.703112 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:38:21 crc kubenswrapper[5012]: E0219 06:38:21.703803 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:38:32 crc kubenswrapper[5012]: I0219 06:38:32.703361 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:38:32 crc kubenswrapper[5012]: E0219 06:38:32.705882 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:38:46 crc kubenswrapper[5012]: I0219 06:38:46.703002 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:38:46 crc kubenswrapper[5012]: E0219 06:38:46.704187 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:00 crc kubenswrapper[5012]: I0219 06:39:00.703390 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:00 crc kubenswrapper[5012]: E0219 06:39:00.704318 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:11 crc kubenswrapper[5012]: I0219 06:39:11.702952 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:11 crc kubenswrapper[5012]: E0219 06:39:11.704023 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:25 crc kubenswrapper[5012]: I0219 06:39:25.703772 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:25 crc kubenswrapper[5012]: E0219 06:39:25.704963 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:36 crc kubenswrapper[5012]: I0219 06:39:36.703627 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:36 crc kubenswrapper[5012]: E0219 06:39:36.704705 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:47 crc kubenswrapper[5012]: I0219 06:39:47.703653 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:47 crc kubenswrapper[5012]: E0219 06:39:47.705363 5012 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:01 crc kubenswrapper[5012]: I0219 06:40:01.703399 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:01 crc kubenswrapper[5012]: E0219 06:40:01.704259 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.053736 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:02 crc kubenswrapper[5012]: E0219 06:40:02.054291 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="registry-server" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.054327 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="registry-server" Feb 19 06:40:02 crc kubenswrapper[5012]: E0219 06:40:02.054344 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="extract-content" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.054352 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" 
containerName="extract-content" Feb 19 06:40:02 crc kubenswrapper[5012]: E0219 06:40:02.054368 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="extract-utilities" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.054377 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="extract-utilities" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.054666 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="registry-server" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.056705 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.091547 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.231649 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.231943 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.232021 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.334395 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.334854 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.334901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.334938 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.335324 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.354922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.408894 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.907423 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:03 crc kubenswrapper[5012]: I0219 06:40:03.746694 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerStarted","Data":"7754906bff20c85446814ba1839e8c3dc453d223f63e48b12a1c0b3a943d78c1"} Feb 19 06:40:04 crc kubenswrapper[5012]: I0219 06:40:04.763296 5012 generic.go:334] "Generic (PLEG): container finished" podID="e5d3e688-640e-4cd4-9729-8405195032a3" containerID="edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb" exitCode=0 Feb 19 06:40:04 crc kubenswrapper[5012]: I0219 06:40:04.763391 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerDied","Data":"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb"} Feb 19 06:40:04 crc kubenswrapper[5012]: I0219 06:40:04.766852 5012 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:40:05 crc kubenswrapper[5012]: I0219 06:40:05.779664 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerStarted","Data":"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251"} Feb 19 06:40:06 crc kubenswrapper[5012]: I0219 06:40:06.800220 5012 generic.go:334] "Generic (PLEG): container finished" podID="e5d3e688-640e-4cd4-9729-8405195032a3" containerID="d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251" exitCode=0 Feb 19 06:40:06 crc kubenswrapper[5012]: I0219 06:40:06.800291 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerDied","Data":"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251"} Feb 19 06:40:07 crc kubenswrapper[5012]: I0219 06:40:07.818853 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerStarted","Data":"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998"} Feb 19 06:40:07 crc kubenswrapper[5012]: I0219 06:40:07.857674 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vlg7v" podStartSLOduration=3.381970489 podStartE2EDuration="5.857651935s" podCreationTimestamp="2026-02-19 06:40:02 +0000 UTC" firstStartedPulling="2026-02-19 06:40:04.765940286 +0000 UTC m=+4500.799262895" lastFinishedPulling="2026-02-19 06:40:07.241621772 +0000 UTC m=+4503.274944341" observedRunningTime="2026-02-19 06:40:07.845795021 +0000 UTC m=+4503.879117620" watchObservedRunningTime="2026-02-19 06:40:07.857651935 +0000 UTC m=+4503.890974524" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.409681 5012 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.410468 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.494474 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.934499 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.995479 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:14 crc kubenswrapper[5012]: I0219 06:40:14.908598 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vlg7v" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="registry-server" containerID="cri-o://785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" gracePeriod=2 Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.410334 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.570929 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") pod \"e5d3e688-640e-4cd4-9729-8405195032a3\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.571151 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") pod \"e5d3e688-640e-4cd4-9729-8405195032a3\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.571427 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") pod \"e5d3e688-640e-4cd4-9729-8405195032a3\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.572906 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities" (OuterVolumeSpecName: "utilities") pod "e5d3e688-640e-4cd4-9729-8405195032a3" (UID: "e5d3e688-640e-4cd4-9729-8405195032a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.581376 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx" (OuterVolumeSpecName: "kube-api-access-7vvrx") pod "e5d3e688-640e-4cd4-9729-8405195032a3" (UID: "e5d3e688-640e-4cd4-9729-8405195032a3"). InnerVolumeSpecName "kube-api-access-7vvrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.651295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5d3e688-640e-4cd4-9729-8405195032a3" (UID: "e5d3e688-640e-4cd4-9729-8405195032a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.674065 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") on node \"crc\" DevicePath \"\"" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.674111 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.674130 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.924609 5012 generic.go:334] "Generic (PLEG): container finished" podID="e5d3e688-640e-4cd4-9729-8405195032a3" containerID="785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" exitCode=0 Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.924746 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.924738 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerDied","Data":"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998"} Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.925783 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerDied","Data":"7754906bff20c85446814ba1839e8c3dc453d223f63e48b12a1c0b3a943d78c1"} Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.925817 5012 scope.go:117] "RemoveContainer" containerID="785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.960526 5012 scope.go:117] "RemoveContainer" containerID="d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.995827 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.004088 5012 scope.go:117] "RemoveContainer" containerID="edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.012382 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.079390 5012 scope.go:117] "RemoveContainer" containerID="785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" Feb 19 06:40:16 crc kubenswrapper[5012]: E0219 06:40:16.079849 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998\": container with ID starting with 785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998 not found: ID does not exist" containerID="785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.079908 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998"} err="failed to get container status \"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998\": rpc error: code = NotFound desc = could not find container \"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998\": container with ID starting with 785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998 not found: ID does not exist" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.079944 5012 scope.go:117] "RemoveContainer" containerID="d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251" Feb 19 06:40:16 crc kubenswrapper[5012]: E0219 06:40:16.080621 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251\": container with ID starting with d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251 not found: ID does not exist" containerID="d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.080661 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251"} err="failed to get container status \"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251\": rpc error: code = NotFound desc = could not find container \"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251\": container with ID 
starting with d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251 not found: ID does not exist" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.080690 5012 scope.go:117] "RemoveContainer" containerID="edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb" Feb 19 06:40:16 crc kubenswrapper[5012]: E0219 06:40:16.082818 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb\": container with ID starting with edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb not found: ID does not exist" containerID="edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.082851 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb"} err="failed to get container status \"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb\": rpc error: code = NotFound desc = could not find container \"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb\": container with ID starting with edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb not found: ID does not exist" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.703740 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:16 crc kubenswrapper[5012]: E0219 06:40:16.704760 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.732500 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" path="/var/lib/kubelet/pods/e5d3e688-640e-4cd4-9729-8405195032a3/volumes" Feb 19 06:40:29 crc kubenswrapper[5012]: I0219 06:40:29.703236 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:29 crc kubenswrapper[5012]: E0219 06:40:29.704164 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:43 crc kubenswrapper[5012]: I0219 06:40:43.703243 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:43 crc kubenswrapper[5012]: E0219 06:40:43.704366 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:56 crc kubenswrapper[5012]: I0219 06:40:56.703533 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:57 crc kubenswrapper[5012]: I0219 06:40:57.622611 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f"} Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.413350 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:22 crc kubenswrapper[5012]: E0219 06:41:22.414438 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="registry-server" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.414456 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="registry-server" Feb 19 06:41:22 crc kubenswrapper[5012]: E0219 06:41:22.414489 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="extract-content" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.414497 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="extract-content" Feb 19 06:41:22 crc kubenswrapper[5012]: E0219 06:41:22.414535 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="extract-utilities" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.414543 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="extract-utilities" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.414794 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="registry-server" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.416545 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.454685 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.499196 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.499584 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.499639 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.602291 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.602407 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.602499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.603086 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.603086 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.628745 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.746182 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:23 crc kubenswrapper[5012]: I0219 06:41:23.230714 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:23 crc kubenswrapper[5012]: I0219 06:41:23.909363 5012 generic.go:334] "Generic (PLEG): container finished" podID="6841b536-8c98-4aad-9989-b588d892ff31" containerID="d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166" exitCode=0 Feb 19 06:41:23 crc kubenswrapper[5012]: I0219 06:41:23.909451 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerDied","Data":"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166"} Feb 19 06:41:23 crc kubenswrapper[5012]: I0219 06:41:23.909632 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerStarted","Data":"942c30e93ef58c6bb6ca3087e3a4abe40e69b68204d55971a374b9fa9499bfc6"} Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.616657 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.621860 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.662136 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.747650 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.747732 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.748239 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.851357 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.851603 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.851733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.852750 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.853543 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.879666 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.972468 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.506879 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.931866 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerStarted","Data":"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438"} Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.935557 5012 generic.go:334] "Generic (PLEG): container finished" podID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerID="a87d0a7fdd603f20783ea6ff56dc5dc304c40610e8dbadc8d1e3e50edb7634ba" exitCode=0 Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.935607 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerDied","Data":"a87d0a7fdd603f20783ea6ff56dc5dc304c40610e8dbadc8d1e3e50edb7634ba"} Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.935663 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerStarted","Data":"2624cc6ec470f8dc84d3c7b03def5d04d47434c9b3d526915845fbd6837bcee1"} Feb 19 06:41:26 crc kubenswrapper[5012]: I0219 06:41:26.945627 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerStarted","Data":"c02346b5da5965008cd2f931a88dfe00233b3fb1cb1efe9113a690b73d595fc4"} Feb 19 06:41:26 crc kubenswrapper[5012]: I0219 06:41:26.948099 5012 generic.go:334] "Generic (PLEG): container finished" podID="6841b536-8c98-4aad-9989-b588d892ff31" 
containerID="baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438" exitCode=0 Feb 19 06:41:26 crc kubenswrapper[5012]: I0219 06:41:26.948133 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerDied","Data":"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438"} Feb 19 06:41:28 crc kubenswrapper[5012]: I0219 06:41:28.965904 5012 generic.go:334] "Generic (PLEG): container finished" podID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerID="c02346b5da5965008cd2f931a88dfe00233b3fb1cb1efe9113a690b73d595fc4" exitCode=0 Feb 19 06:41:28 crc kubenswrapper[5012]: I0219 06:41:28.966022 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerDied","Data":"c02346b5da5965008cd2f931a88dfe00233b3fb1cb1efe9113a690b73d595fc4"} Feb 19 06:41:28 crc kubenswrapper[5012]: I0219 06:41:28.970791 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerStarted","Data":"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148"} Feb 19 06:41:29 crc kubenswrapper[5012]: I0219 06:41:29.010038 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tf4n4" podStartSLOduration=2.993090319 podStartE2EDuration="7.010022682s" podCreationTimestamp="2026-02-19 06:41:22 +0000 UTC" firstStartedPulling="2026-02-19 06:41:23.911888223 +0000 UTC m=+4579.945210832" lastFinishedPulling="2026-02-19 06:41:27.928820626 +0000 UTC m=+4583.962143195" observedRunningTime="2026-02-19 06:41:29.004882709 +0000 UTC m=+4585.038205278" watchObservedRunningTime="2026-02-19 06:41:29.010022682 +0000 UTC m=+4585.043345251" Feb 19 06:41:31 crc kubenswrapper[5012]: I0219 
06:41:31.004020 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerStarted","Data":"24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa"} Feb 19 06:41:31 crc kubenswrapper[5012]: I0219 06:41:31.041070 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-66cmz" podStartSLOduration=3.470736904 podStartE2EDuration="7.041040448s" podCreationTimestamp="2026-02-19 06:41:24 +0000 UTC" firstStartedPulling="2026-02-19 06:41:25.93791197 +0000 UTC m=+4581.971234539" lastFinishedPulling="2026-02-19 06:41:29.508215504 +0000 UTC m=+4585.541538083" observedRunningTime="2026-02-19 06:41:31.031494229 +0000 UTC m=+4587.064816808" watchObservedRunningTime="2026-02-19 06:41:31.041040448 +0000 UTC m=+4587.074363057" Feb 19 06:41:32 crc kubenswrapper[5012]: I0219 06:41:32.747408 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:32 crc kubenswrapper[5012]: I0219 06:41:32.747944 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:32 crc kubenswrapper[5012]: I0219 06:41:32.816406 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:33 crc kubenswrapper[5012]: I0219 06:41:33.121694 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:34 crc kubenswrapper[5012]: I0219 06:41:34.794666 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:34 crc kubenswrapper[5012]: I0219 06:41:34.973835 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:34 crc kubenswrapper[5012]: I0219 06:41:34.974118 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.053982 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tf4n4" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="registry-server" containerID="cri-o://61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" gracePeriod=2 Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.611028 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.671840 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") pod \"6841b536-8c98-4aad-9989-b588d892ff31\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.671993 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") pod \"6841b536-8c98-4aad-9989-b588d892ff31\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.672094 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") pod \"6841b536-8c98-4aad-9989-b588d892ff31\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.673435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities" (OuterVolumeSpecName: "utilities") pod "6841b536-8c98-4aad-9989-b588d892ff31" (UID: "6841b536-8c98-4aad-9989-b588d892ff31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.682062 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54" (OuterVolumeSpecName: "kube-api-access-kld54") pod "6841b536-8c98-4aad-9989-b588d892ff31" (UID: "6841b536-8c98-4aad-9989-b588d892ff31"). InnerVolumeSpecName "kube-api-access-kld54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.722186 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6841b536-8c98-4aad-9989-b588d892ff31" (UID: "6841b536-8c98-4aad-9989-b588d892ff31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.775746 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.775804 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.775824 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.028891 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66cmz" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" probeResult="failure" output=< Feb 19 06:41:36 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:41:36 crc kubenswrapper[5012]: > Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.066443 5012 generic.go:334] "Generic (PLEG): container finished" podID="6841b536-8c98-4aad-9989-b588d892ff31" containerID="61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" exitCode=0 Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.066486 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerDied","Data":"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148"} Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.066514 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerDied","Data":"942c30e93ef58c6bb6ca3087e3a4abe40e69b68204d55971a374b9fa9499bfc6"} Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.066532 5012 scope.go:117] "RemoveContainer" containerID="61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.067022 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.090608 5012 scope.go:117] "RemoveContainer" containerID="baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.109336 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.123088 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.146091 5012 scope.go:117] "RemoveContainer" containerID="d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.173913 5012 scope.go:117] "RemoveContainer" containerID="61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" Feb 19 06:41:36 crc kubenswrapper[5012]: E0219 06:41:36.174498 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148\": container with ID starting with 61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148 not found: ID does not exist" containerID="61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.174548 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148"} err="failed to get container status \"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148\": rpc error: code = NotFound desc = could not find container \"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148\": container with ID starting with 61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148 not found: ID does not exist" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.174575 5012 scope.go:117] "RemoveContainer" containerID="baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438" Feb 19 06:41:36 crc kubenswrapper[5012]: E0219 06:41:36.175022 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438\": container with ID starting with baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438 not found: ID does not exist" containerID="baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.175058 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438"} err="failed to get container status \"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438\": rpc error: code = NotFound desc = could not find container \"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438\": container with ID starting with baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438 not found: ID does not exist" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.175082 5012 scope.go:117] "RemoveContainer" containerID="d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166" Feb 19 06:41:36 crc kubenswrapper[5012]: E0219 
06:41:36.175373 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166\": container with ID starting with d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166 not found: ID does not exist" containerID="d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.175397 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166"} err="failed to get container status \"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166\": rpc error: code = NotFound desc = could not find container \"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166\": container with ID starting with d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166 not found: ID does not exist" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.723770 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6841b536-8c98-4aad-9989-b588d892ff31" path="/var/lib/kubelet/pods/6841b536-8c98-4aad-9989-b588d892ff31/volumes" Feb 19 06:41:43 crc kubenswrapper[5012]: I0219 06:41:43.726693 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerName="galera" probeResult="failure" output="command timed out" Feb 19 06:41:43 crc kubenswrapper[5012]: I0219 06:41:43.726873 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerName="galera" probeResult="failure" output="command timed out" Feb 19 06:41:45 crc kubenswrapper[5012]: I0219 06:41:45.076288 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-66cmz" Feb 
19 06:41:45 crc kubenswrapper[5012]: I0219 06:41:45.151154 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:45 crc kubenswrapper[5012]: I0219 06:41:45.366106 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:46 crc kubenswrapper[5012]: I0219 06:41:46.199117 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-66cmz" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" containerID="cri-o://24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa" gracePeriod=2 Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.210951 5012 generic.go:334] "Generic (PLEG): container finished" podID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerID="24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa" exitCode=0 Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.211039 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerDied","Data":"24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa"} Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.712801 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.815336 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") pod \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.815404 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") pod \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.815460 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") pod \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.818003 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities" (OuterVolumeSpecName: "utilities") pod "61ce3f5e-028c-4b7f-a71e-a92e3d856c23" (UID: "61ce3f5e-028c-4b7f-a71e-a92e3d856c23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.827831 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp" (OuterVolumeSpecName: "kube-api-access-2ghkp") pod "61ce3f5e-028c-4b7f-a71e-a92e3d856c23" (UID: "61ce3f5e-028c-4b7f-a71e-a92e3d856c23"). InnerVolumeSpecName "kube-api-access-2ghkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.918472 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.918530 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.936167 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61ce3f5e-028c-4b7f-a71e-a92e3d856c23" (UID: "61ce3f5e-028c-4b7f-a71e-a92e3d856c23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.020707 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.227455 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerDied","Data":"2624cc6ec470f8dc84d3c7b03def5d04d47434c9b3d526915845fbd6837bcee1"} Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.227535 5012 scope.go:117] "RemoveContainer" containerID="24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.227574 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.271057 5012 scope.go:117] "RemoveContainer" containerID="c02346b5da5965008cd2f931a88dfe00233b3fb1cb1efe9113a690b73d595fc4" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.281712 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.291873 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.310000 5012 scope.go:117] "RemoveContainer" containerID="a87d0a7fdd603f20783ea6ff56dc5dc304c40610e8dbadc8d1e3e50edb7634ba" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.727840 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" path="/var/lib/kubelet/pods/61ce3f5e-028c-4b7f-a71e-a92e3d856c23/volumes" Feb 19 06:43:14 crc kubenswrapper[5012]: I0219 06:43:14.431249 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:43:14 crc kubenswrapper[5012]: I0219 06:43:14.431924 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:43:44 crc kubenswrapper[5012]: I0219 06:43:44.431431 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:43:44 crc kubenswrapper[5012]: I0219 06:43:44.432783 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.431169 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.431932 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.432005 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.433134 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.433287 5012 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f" gracePeriod=600 Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.899885 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f" exitCode=0 Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.899984 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f"} Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.900427 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"} Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.900469 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.173569 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx"] Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175260 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="extract-utilities" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175294 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841b536-8c98-4aad-9989-b588d892ff31" 
containerName="extract-utilities" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175369 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="extract-utilities" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175386 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="extract-utilities" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175429 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175448 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175473 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="extract-content" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175488 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="extract-content" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175526 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175542 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175600 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="extract-content" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175618 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" 
containerName="extract-content" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.176062 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.176141 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.177646 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.180903 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.182362 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.187809 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx"] Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.327939 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.328071 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") pod 
\"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.328117 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.430234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.430414 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.430485 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.431745 5012 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.440013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.457212 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.509453 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.826545 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx"] Feb 19 06:45:01 crc kubenswrapper[5012]: I0219 06:45:01.593490 5012 generic.go:334] "Generic (PLEG): container finished" podID="a162758d-5bc7-4bb8-949c-e32d2f33a380" containerID="23b98103b785157c94165d83573686c65c77cd8c456afe6a275faa1a2a6f6d07" exitCode=0 Feb 19 06:45:01 crc kubenswrapper[5012]: I0219 06:45:01.593705 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" event={"ID":"a162758d-5bc7-4bb8-949c-e32d2f33a380","Type":"ContainerDied","Data":"23b98103b785157c94165d83573686c65c77cd8c456afe6a275faa1a2a6f6d07"} Feb 19 06:45:01 crc kubenswrapper[5012]: I0219 06:45:01.594031 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" event={"ID":"a162758d-5bc7-4bb8-949c-e32d2f33a380","Type":"ContainerStarted","Data":"6f1bbee1a753485477679410e5d6880b9435760f5bdd367bcbd68fdfbc9f18e8"} Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.008061 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.190559 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") pod \"a162758d-5bc7-4bb8-949c-e32d2f33a380\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.190813 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") pod \"a162758d-5bc7-4bb8-949c-e32d2f33a380\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.190851 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") pod \"a162758d-5bc7-4bb8-949c-e32d2f33a380\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.192176 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume" (OuterVolumeSpecName: "config-volume") pod "a162758d-5bc7-4bb8-949c-e32d2f33a380" (UID: "a162758d-5bc7-4bb8-949c-e32d2f33a380"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.197197 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2" (OuterVolumeSpecName: "kube-api-access-82vp2") pod "a162758d-5bc7-4bb8-949c-e32d2f33a380" (UID: "a162758d-5bc7-4bb8-949c-e32d2f33a380"). 
InnerVolumeSpecName "kube-api-access-82vp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.197430 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a162758d-5bc7-4bb8-949c-e32d2f33a380" (UID: "a162758d-5bc7-4bb8-949c-e32d2f33a380"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.293373 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.293411 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") on node \"crc\" DevicePath \"\"" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.293425 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.622974 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" event={"ID":"a162758d-5bc7-4bb8-949c-e32d2f33a380","Type":"ContainerDied","Data":"6f1bbee1a753485477679410e5d6880b9435760f5bdd367bcbd68fdfbc9f18e8"} Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.623031 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f1bbee1a753485477679410e5d6880b9435760f5bdd367bcbd68fdfbc9f18e8" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.623062 5012 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:04 crc kubenswrapper[5012]: I0219 06:45:04.100016 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:45:04 crc kubenswrapper[5012]: I0219 06:45:04.111213 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:45:04 crc kubenswrapper[5012]: I0219 06:45:04.718819 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" path="/var/lib/kubelet/pods/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e/volumes" Feb 19 06:45:39 crc kubenswrapper[5012]: I0219 06:45:39.424444 5012 scope.go:117] "RemoveContainer" containerID="aab8c26b7c272ad359e2397dc4c5c133e04f23474846a5322b643a2a4fdad8bd" Feb 19 06:46:09 crc kubenswrapper[5012]: E0219 06:46:09.062025 5012 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.110:53294->38.102.83.110:36123: write tcp 38.102.83.110:53294->38.102.83.110:36123: write: broken pipe Feb 19 06:46:14 crc kubenswrapper[5012]: I0219 06:46:14.431030 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:46:14 crc kubenswrapper[5012]: I0219 06:46:14.431671 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:46:44 crc kubenswrapper[5012]: I0219 
06:46:44.430415 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:46:44 crc kubenswrapper[5012]: I0219 06:46:44.430993 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.430849 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.431329 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.431376 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.432112 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.432161 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" gracePeriod=600 Feb 19 06:47:14 crc kubenswrapper[5012]: E0219 06:47:14.565987 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:47:15 crc kubenswrapper[5012]: I0219 06:47:15.179579 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" exitCode=0 Feb 19 06:47:15 crc kubenswrapper[5012]: I0219 06:47:15.179842 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"} Feb 19 06:47:15 crc kubenswrapper[5012]: I0219 06:47:15.179877 5012 scope.go:117] "RemoveContainer" containerID="00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f" Feb 19 06:47:15 crc kubenswrapper[5012]: I0219 06:47:15.180890 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:47:15 crc kubenswrapper[5012]: E0219 06:47:15.181435 5012 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:47:28 crc kubenswrapper[5012]: I0219 06:47:28.703272 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:47:28 crc kubenswrapper[5012]: E0219 06:47:28.704279 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:47:40 crc kubenswrapper[5012]: I0219 06:47:40.702789 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:47:40 crc kubenswrapper[5012]: E0219 06:47:40.703420 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:47:51 crc kubenswrapper[5012]: I0219 06:47:51.703753 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:47:51 crc kubenswrapper[5012]: E0219 
06:47:51.704731 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:02 crc kubenswrapper[5012]: I0219 06:48:02.704506 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:02 crc kubenswrapper[5012]: E0219 06:48:02.707171 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:17 crc kubenswrapper[5012]: I0219 06:48:17.704498 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:17 crc kubenswrapper[5012]: E0219 06:48:17.706021 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:28 crc kubenswrapper[5012]: I0219 06:48:28.703362 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:28 crc 
kubenswrapper[5012]: E0219 06:48:28.704407 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:41 crc kubenswrapper[5012]: I0219 06:48:41.702687 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:41 crc kubenswrapper[5012]: E0219 06:48:41.703585 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:55 crc kubenswrapper[5012]: I0219 06:48:55.703537 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:55 crc kubenswrapper[5012]: E0219 06:48:55.704808 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:49:08 crc kubenswrapper[5012]: I0219 06:49:08.703769 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 
19 06:49:08 crc kubenswrapper[5012]: E0219 06:49:08.704688 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:49:22 crc kubenswrapper[5012]: I0219 06:49:22.703681 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:49:22 crc kubenswrapper[5012]: E0219 06:49:22.704802 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:49:37 crc kubenswrapper[5012]: I0219 06:49:37.704230 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:49:37 crc kubenswrapper[5012]: E0219 06:49:37.706376 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:49:52 crc kubenswrapper[5012]: I0219 06:49:52.703174 5012 scope.go:117] "RemoveContainer" 
containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:49:52 crc kubenswrapper[5012]: E0219 06:49:52.706622 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:05 crc kubenswrapper[5012]: I0219 06:50:05.703056 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:50:05 crc kubenswrapper[5012]: E0219 06:50:05.704639 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.168880 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:15 crc kubenswrapper[5012]: E0219 06:50:15.170105 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a162758d-5bc7-4bb8-949c-e32d2f33a380" containerName="collect-profiles" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.170128 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a162758d-5bc7-4bb8-949c-e32d2f33a380" containerName="collect-profiles" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.170581 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a162758d-5bc7-4bb8-949c-e32d2f33a380" 
containerName="collect-profiles" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.172982 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.184937 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.235506 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.235663 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.235729 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.337441 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") 
" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.337547 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.337593 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.338129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.338282 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.357912 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " 
pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.525480 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.064024 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.556468 5012 generic.go:334] "Generic (PLEG): container finished" podID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerID="580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90" exitCode=0 Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.556587 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerDied","Data":"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90"} Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.557293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerStarted","Data":"1ae9311f07ab91b8cc6e0a3d9e7f87d400993f9f8f9110238874445a3119b52d"} Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.559557 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:50:17 crc kubenswrapper[5012]: I0219 06:50:17.572760 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerStarted","Data":"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e"} Feb 19 06:50:18 crc kubenswrapper[5012]: I0219 06:50:18.593665 5012 generic.go:334] "Generic (PLEG): container finished" podID="91142b0c-3f47-468d-b976-121a1a8afb9a" 
containerID="c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e" exitCode=0 Feb 19 06:50:18 crc kubenswrapper[5012]: I0219 06:50:18.593750 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerDied","Data":"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e"} Feb 19 06:50:20 crc kubenswrapper[5012]: I0219 06:50:20.623316 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerStarted","Data":"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7"} Feb 19 06:50:20 crc kubenswrapper[5012]: I0219 06:50:20.664297 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7tqft" podStartSLOduration=3.242569656 podStartE2EDuration="5.664270749s" podCreationTimestamp="2026-02-19 06:50:15 +0000 UTC" firstStartedPulling="2026-02-19 06:50:16.559087692 +0000 UTC m=+5112.592410291" lastFinishedPulling="2026-02-19 06:50:18.980788785 +0000 UTC m=+5115.014111384" observedRunningTime="2026-02-19 06:50:20.645447155 +0000 UTC m=+5116.678769744" watchObservedRunningTime="2026-02-19 06:50:20.664270749 +0000 UTC m=+5116.697593358" Feb 19 06:50:20 crc kubenswrapper[5012]: I0219 06:50:20.703779 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:50:20 crc kubenswrapper[5012]: E0219 06:50:20.704084 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.525633 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.526126 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.591775 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.734044 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.834549 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:27 crc kubenswrapper[5012]: I0219 06:50:27.697959 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7tqft" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="registry-server" containerID="cri-o://93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" gracePeriod=2 Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.271426 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.424773 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") pod \"91142b0c-3f47-468d-b976-121a1a8afb9a\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.425004 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") pod \"91142b0c-3f47-468d-b976-121a1a8afb9a\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.425049 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") pod \"91142b0c-3f47-468d-b976-121a1a8afb9a\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.425889 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities" (OuterVolumeSpecName: "utilities") pod "91142b0c-3f47-468d-b976-121a1a8afb9a" (UID: "91142b0c-3f47-468d-b976-121a1a8afb9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.434369 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6" (OuterVolumeSpecName: "kube-api-access-lnlw6") pod "91142b0c-3f47-468d-b976-121a1a8afb9a" (UID: "91142b0c-3f47-468d-b976-121a1a8afb9a"). InnerVolumeSpecName "kube-api-access-lnlw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.472214 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91142b0c-3f47-468d-b976-121a1a8afb9a" (UID: "91142b0c-3f47-468d-b976-121a1a8afb9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.527148 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.527175 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.527187 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.716615 5012 generic.go:334] "Generic (PLEG): container finished" podID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerID="93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" exitCode=0 Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.716808 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.739151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerDied","Data":"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7"} Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.739195 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerDied","Data":"1ae9311f07ab91b8cc6e0a3d9e7f87d400993f9f8f9110238874445a3119b52d"} Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.739211 5012 scope.go:117] "RemoveContainer" containerID="93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.785449 5012 scope.go:117] "RemoveContainer" containerID="c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.787009 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.799159 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.812067 5012 scope.go:117] "RemoveContainer" containerID="580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.852418 5012 scope.go:117] "RemoveContainer" containerID="93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" Feb 19 06:50:28 crc kubenswrapper[5012]: E0219 06:50:28.852942 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7\": container with ID starting with 93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7 not found: ID does not exist" containerID="93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.853009 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7"} err="failed to get container status \"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7\": rpc error: code = NotFound desc = could not find container \"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7\": container with ID starting with 93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7 not found: ID does not exist" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.853035 5012 scope.go:117] "RemoveContainer" containerID="c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e" Feb 19 06:50:28 crc kubenswrapper[5012]: E0219 06:50:28.853871 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e\": container with ID starting with c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e not found: ID does not exist" containerID="c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.853910 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e"} err="failed to get container status \"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e\": rpc error: code = NotFound desc = could not find container \"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e\": container with ID 
starting with c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e not found: ID does not exist" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.853939 5012 scope.go:117] "RemoveContainer" containerID="580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90" Feb 19 06:50:28 crc kubenswrapper[5012]: E0219 06:50:28.854433 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90\": container with ID starting with 580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90 not found: ID does not exist" containerID="580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.854735 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90"} err="failed to get container status \"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90\": rpc error: code = NotFound desc = could not find container \"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90\": container with ID starting with 580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90 not found: ID does not exist" Feb 19 06:50:30 crc kubenswrapper[5012]: I0219 06:50:30.713806 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" path="/var/lib/kubelet/pods/91142b0c-3f47-468d-b976-121a1a8afb9a/volumes" Feb 19 06:50:32 crc kubenswrapper[5012]: I0219 06:50:32.765749 5012 generic.go:334] "Generic (PLEG): container finished" podID="54eccb09-b3ec-45bc-8065-4c5eb9516257" containerID="45a71cb7a299afd86b43701046f8b7c089e907df4ed4d824464d2883ac4074ea" exitCode=0 Feb 19 06:50:32 crc kubenswrapper[5012]: I0219 06:50:32.765867 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest" event={"ID":"54eccb09-b3ec-45bc-8065-4c5eb9516257","Type":"ContainerDied","Data":"45a71cb7a299afd86b43701046f8b7c089e907df4ed4d824464d2883ac4074ea"} Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.231197 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268181 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268220 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268249 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268281 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268426 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268584 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268648 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268695 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268764 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268957 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). 
InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.269348 5012 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.269468 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data" (OuterVolumeSpecName: "config-data") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.278579 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.287496 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.288359 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk" (OuterVolumeSpecName: "kube-api-access-b8klk") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "kube-api-access-b8klk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.314729 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.322051 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.334547 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.360380 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371067 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371122 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371133 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371144 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371154 5012 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371163 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371173 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371183 5012 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.395931 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.472848 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.709962 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:50:34 crc kubenswrapper[5012]: E0219 06:50:34.710659 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.786224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"54eccb09-b3ec-45bc-8065-4c5eb9516257","Type":"ContainerDied","Data":"4d40402dd6566caf396779f17c2dbfad2df685b1f64caf3b6b294fc60c0aaaea"} Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.786281 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d40402dd6566caf396779f17c2dbfad2df685b1f64caf3b6b294fc60c0aaaea" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.786594 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.876078 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 06:50:42 crc kubenswrapper[5012]: E0219 06:50:42.877162 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="registry-server" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877180 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="registry-server" Feb 19 06:50:42 crc kubenswrapper[5012]: E0219 06:50:42.877205 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="extract-content" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877212 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="extract-content" Feb 19 06:50:42 crc kubenswrapper[5012]: E0219 06:50:42.877239 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="extract-utilities" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877248 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="extract-utilities" Feb 19 06:50:42 crc kubenswrapper[5012]: E0219 06:50:42.877272 5012 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eccb09-b3ec-45bc-8065-4c5eb9516257" containerName="tempest-tests-tempest-tests-runner" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877280 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eccb09-b3ec-45bc-8065-4c5eb9516257" containerName="tempest-tests-tempest-tests-runner" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877628 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="54eccb09-b3ec-45bc-8065-4c5eb9516257" containerName="tempest-tests-tempest-tests-runner" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877646 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="registry-server" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.878492 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.882376 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-s2ths" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.888203 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.011382 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfw56\" (UniqueName: \"kubernetes.io/projected/78c125a8-bf69-4524-9b70-be9fe9f313e7-kube-api-access-rfw56\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.012021 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.114412 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.114591 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfw56\" (UniqueName: \"kubernetes.io/projected/78c125a8-bf69-4524-9b70-be9fe9f313e7-kube-api-access-rfw56\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.115016 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.148471 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfw56\" (UniqueName: \"kubernetes.io/projected/78c125a8-bf69-4524-9b70-be9fe9f313e7-kube-api-access-rfw56\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.161962 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.228128 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: W0219 06:50:43.740134 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c125a8_bf69_4524_9b70_be9fe9f313e7.slice/crio-e8c180c988dc2a116669d5fc6c228b239b87284096937552d3fce9967c06c195 WatchSource:0}: Error finding container e8c180c988dc2a116669d5fc6c228b239b87284096937552d3fce9967c06c195: Status 404 returned error can't find the container with id e8c180c988dc2a116669d5fc6c228b239b87284096937552d3fce9967c06c195 Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.745191 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.944392 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"78c125a8-bf69-4524-9b70-be9fe9f313e7","Type":"ContainerStarted","Data":"e8c180c988dc2a116669d5fc6c228b239b87284096937552d3fce9967c06c195"} Feb 19 06:50:44 crc kubenswrapper[5012]: I0219 06:50:44.957981 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
event={"ID":"78c125a8-bf69-4524-9b70-be9fe9f313e7","Type":"ContainerStarted","Data":"6e96d1e4f9b7f9a554bfb367924f09fe9252e50f14f75ecf0e1e186fb76f5965"} Feb 19 06:50:44 crc kubenswrapper[5012]: I0219 06:50:44.977832 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.158438272 podStartE2EDuration="2.977800788s" podCreationTimestamp="2026-02-19 06:50:42 +0000 UTC" firstStartedPulling="2026-02-19 06:50:43.74325462 +0000 UTC m=+5139.776577199" lastFinishedPulling="2026-02-19 06:50:44.562617146 +0000 UTC m=+5140.595939715" observedRunningTime="2026-02-19 06:50:44.971728051 +0000 UTC m=+5141.005050700" watchObservedRunningTime="2026-02-19 06:50:44.977800788 +0000 UTC m=+5141.011123397" Feb 19 06:50:45 crc kubenswrapper[5012]: I0219 06:50:45.703503 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:50:45 crc kubenswrapper[5012]: E0219 06:50:45.704262 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.398481 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hz76m"] Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.402102 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.413456 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"] Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.488897 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.592003 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.592203 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.592411 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.592874 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.693707 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.693862 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.694379 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.717396 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.723170 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:50 crc kubenswrapper[5012]: I0219 06:50:50.239902 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"] Feb 19 06:50:51 crc kubenswrapper[5012]: I0219 06:50:51.036809 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerID="6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec" exitCode=0 Feb 19 06:50:51 crc kubenswrapper[5012]: I0219 06:50:51.036866 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerDied","Data":"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec"} Feb 19 06:50:51 crc kubenswrapper[5012]: I0219 06:50:51.037391 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerStarted","Data":"18947d88a25745afff68df1e41694c114c48442134135268e3638f0b3c1c1e62"} Feb 19 06:50:52 crc kubenswrapper[5012]: I0219 06:50:52.048738 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerStarted","Data":"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"} Feb 19 06:50:53 crc kubenswrapper[5012]: I0219 06:50:53.066736 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerID="5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39" exitCode=0 Feb 19 06:50:53 crc kubenswrapper[5012]: I0219 06:50:53.066810 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" 
event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerDied","Data":"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"} Feb 19 06:50:54 crc kubenswrapper[5012]: I0219 06:50:54.079859 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerStarted","Data":"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"} Feb 19 06:50:54 crc kubenswrapper[5012]: I0219 06:50:54.108364 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hz76m" podStartSLOduration=2.635609689 podStartE2EDuration="5.108338033s" podCreationTimestamp="2026-02-19 06:50:49 +0000 UTC" firstStartedPulling="2026-02-19 06:50:51.039968432 +0000 UTC m=+5147.073291001" lastFinishedPulling="2026-02-19 06:50:53.512696736 +0000 UTC m=+5149.546019345" observedRunningTime="2026-02-19 06:50:54.099768816 +0000 UTC m=+5150.133091415" watchObservedRunningTime="2026-02-19 06:50:54.108338033 +0000 UTC m=+5150.141660642" Feb 19 06:50:59 crc kubenswrapper[5012]: I0219 06:50:59.723781 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:59 crc kubenswrapper[5012]: I0219 06:50:59.724272 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:59 crc kubenswrapper[5012]: I0219 06:50:59.780475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:51:00 crc kubenswrapper[5012]: I0219 06:51:00.234655 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:51:00 crc kubenswrapper[5012]: I0219 06:51:00.315559 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hz76m"] Feb 19 06:51:00 crc kubenswrapper[5012]: I0219 06:51:00.703489 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:51:00 crc kubenswrapper[5012]: E0219 06:51:00.704060 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.173997 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hz76m" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="registry-server" containerID="cri-o://8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83" gracePeriod=2 Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.695982 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.802398 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") pod \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.802858 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") pod \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.803028 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") pod \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.803691 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities" (OuterVolumeSpecName: "utilities") pod "d5865e83-a688-4445-8b42-3ebaf9f9c74e" (UID: "d5865e83-a688-4445-8b42-3ebaf9f9c74e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.804802 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.811803 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk" (OuterVolumeSpecName: "kube-api-access-h4xtk") pod "d5865e83-a688-4445-8b42-3ebaf9f9c74e" (UID: "d5865e83-a688-4445-8b42-3ebaf9f9c74e"). InnerVolumeSpecName "kube-api-access-h4xtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.855655 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5865e83-a688-4445-8b42-3ebaf9f9c74e" (UID: "d5865e83-a688-4445-8b42-3ebaf9f9c74e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.907123 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.907166 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") on node \"crc\" DevicePath \"\"" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.188640 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerID="8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83" exitCode=0 Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.188715 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.188743 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerDied","Data":"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"} Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.189151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerDied","Data":"18947d88a25745afff68df1e41694c114c48442134135268e3638f0b3c1c1e62"} Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.189181 5012 scope.go:117] "RemoveContainer" containerID="8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.229089 5012 scope.go:117] "RemoveContainer" 
containerID="5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.258881 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"] Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.270659 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"] Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.276129 5012 scope.go:117] "RemoveContainer" containerID="6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.323268 5012 scope.go:117] "RemoveContainer" containerID="8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83" Feb 19 06:51:03 crc kubenswrapper[5012]: E0219 06:51:03.323900 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83\": container with ID starting with 8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83 not found: ID does not exist" containerID="8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.323949 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"} err="failed to get container status \"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83\": rpc error: code = NotFound desc = could not find container \"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83\": container with ID starting with 8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83 not found: ID does not exist" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.323977 5012 scope.go:117] "RemoveContainer" 
containerID="5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39" Feb 19 06:51:03 crc kubenswrapper[5012]: E0219 06:51:03.324347 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39\": container with ID starting with 5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39 not found: ID does not exist" containerID="5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.324373 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"} err="failed to get container status \"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39\": rpc error: code = NotFound desc = could not find container \"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39\": container with ID starting with 5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39 not found: ID does not exist" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.324385 5012 scope.go:117] "RemoveContainer" containerID="6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec" Feb 19 06:51:03 crc kubenswrapper[5012]: E0219 06:51:03.324765 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec\": container with ID starting with 6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec not found: ID does not exist" containerID="6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec" Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.324795 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec"} err="failed to get container status \"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec\": rpc error: code = NotFound desc = could not find container \"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec\": container with ID starting with 6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec not found: ID does not exist" Feb 19 06:51:04 crc kubenswrapper[5012]: I0219 06:51:04.738033 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" path="/var/lib/kubelet/pods/d5865e83-a688-4445-8b42-3ebaf9f9c74e/volumes" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.356353 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"] Feb 19 06:51:07 crc kubenswrapper[5012]: E0219 06:51:07.357248 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="extract-utilities" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.357265 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="extract-utilities" Feb 19 06:51:07 crc kubenswrapper[5012]: E0219 06:51:07.357289 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="registry-server" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.357298 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="registry-server" Feb 19 06:51:07 crc kubenswrapper[5012]: E0219 06:51:07.357337 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="extract-content" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.357345 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="extract-content" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.357655 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="registry-server" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.359167 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.361729 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hncx9"/"kube-root-ca.crt" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.361832 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hncx9"/"default-dockercfg-wpbmp" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.362736 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hncx9"/"openshift-service-ca.crt" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.366025 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"] Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.413821 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.413907 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " 
pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.526490 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.526609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.527111 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.559629 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.686691 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:51:08 crc kubenswrapper[5012]: I0219 06:51:08.232363 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"] Feb 19 06:51:08 crc kubenswrapper[5012]: I0219 06:51:08.252176 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/must-gather-gs9fs" event={"ID":"5afd9390-aa19-4b48-b659-089e59ea82e5","Type":"ContainerStarted","Data":"bc49e8ad6e545cc5030ac5e432a623684663b16c84d6e5b7cede6ad7d29cfea6"} Feb 19 06:51:14 crc kubenswrapper[5012]: I0219 06:51:14.714993 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:51:14 crc kubenswrapper[5012]: E0219 06:51:14.715721 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:51:15 crc kubenswrapper[5012]: I0219 06:51:15.326228 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/must-gather-gs9fs" event={"ID":"5afd9390-aa19-4b48-b659-089e59ea82e5","Type":"ContainerStarted","Data":"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"} Feb 19 06:51:16 crc kubenswrapper[5012]: I0219 06:51:16.341371 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/must-gather-gs9fs" event={"ID":"5afd9390-aa19-4b48-b659-089e59ea82e5","Type":"ContainerStarted","Data":"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48"} Feb 19 06:51:16 crc kubenswrapper[5012]: I0219 06:51:16.362800 5012 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-hncx9/must-gather-gs9fs" podStartSLOduration=2.716280245 podStartE2EDuration="9.362780352s" podCreationTimestamp="2026-02-19 06:51:07 +0000 UTC" firstStartedPulling="2026-02-19 06:51:08.225052749 +0000 UTC m=+5164.258375318" lastFinishedPulling="2026-02-19 06:51:14.871552855 +0000 UTC m=+5170.904875425" observedRunningTime="2026-02-19 06:51:16.35979588 +0000 UTC m=+5172.393118469" watchObservedRunningTime="2026-02-19 06:51:16.362780352 +0000 UTC m=+5172.396102921" Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.076417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hncx9/crc-debug-57vjb"] Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.080367 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.135378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.135697 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.237960 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " 
pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.238070 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.238152 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.264485 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.404926 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:51:20 crc kubenswrapper[5012]: W0219 06:51:20.443974 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod809fe06a_5a2d_4ac8_90d0_5a2569f3e116.slice/crio-4610aaf767b371d7342611a4d17e5add337a68f49f5566ada2aafea0d8b89112 WatchSource:0}: Error finding container 4610aaf767b371d7342611a4d17e5add337a68f49f5566ada2aafea0d8b89112: Status 404 returned error can't find the container with id 4610aaf767b371d7342611a4d17e5add337a68f49f5566ada2aafea0d8b89112 Feb 19 06:51:21 crc kubenswrapper[5012]: I0219 06:51:21.388545 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-57vjb" event={"ID":"809fe06a-5a2d-4ac8-90d0-5a2569f3e116","Type":"ContainerStarted","Data":"4610aaf767b371d7342611a4d17e5add337a68f49f5566ada2aafea0d8b89112"} Feb 19 06:51:29 crc kubenswrapper[5012]: I0219 06:51:29.703560 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:51:29 crc kubenswrapper[5012]: E0219 06:51:29.704346 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:51:32 crc kubenswrapper[5012]: I0219 06:51:32.485273 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-57vjb" event={"ID":"809fe06a-5a2d-4ac8-90d0-5a2569f3e116","Type":"ContainerStarted","Data":"eee8e3c869cd66a7f0fdd02a2d1a9b68c170e21d37668f164d794773d0198ed5"} Feb 19 06:51:32 crc kubenswrapper[5012]: I0219 06:51:32.509353 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hncx9/crc-debug-57vjb" podStartSLOduration=1.628782629 podStartE2EDuration="12.509331031s" podCreationTimestamp="2026-02-19 06:51:20 +0000 UTC" firstStartedPulling="2026-02-19 06:51:20.446037146 +0000 UTC m=+5176.479359725" lastFinishedPulling="2026-02-19 06:51:31.326585558 +0000 UTC m=+5187.359908127" observedRunningTime="2026-02-19 06:51:32.498332676 +0000 UTC m=+5188.531655245" watchObservedRunningTime="2026-02-19 06:51:32.509331031 +0000 UTC m=+5188.542653600" Feb 19 06:51:43 crc kubenswrapper[5012]: I0219 06:51:43.703073 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:51:43 crc kubenswrapper[5012]: E0219 06:51:43.704732 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.093170 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.096761 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.111017 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.168661 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.168771 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.168805 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.270876 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.270989 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.271014 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.271383 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.271537 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.293764 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.428941 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:45 crc kubenswrapper[5012]: I0219 06:51:45.015355 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:51:45 crc kubenswrapper[5012]: I0219 06:51:45.606107 5012 generic.go:334] "Generic (PLEG): container finished" podID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerID="07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1" exitCode=0 Feb 19 06:51:45 crc kubenswrapper[5012]: I0219 06:51:45.606435 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerDied","Data":"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1"} Feb 19 06:51:45 crc kubenswrapper[5012]: I0219 06:51:45.607145 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerStarted","Data":"6de2c1173cadccb7b874d72d1d3862009696c41fda3c7af45ebce6e2f0fa3d9c"} Feb 19 06:51:46 crc kubenswrapper[5012]: I0219 06:51:46.621852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerStarted","Data":"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec"} Feb 19 06:51:48 crc kubenswrapper[5012]: I0219 06:51:48.643027 5012 generic.go:334] "Generic (PLEG): container finished" podID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerID="ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec" exitCode=0 Feb 19 06:51:48 crc kubenswrapper[5012]: I0219 06:51:48.643117 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" 
event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerDied","Data":"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec"} Feb 19 06:51:50 crc kubenswrapper[5012]: I0219 06:51:50.666083 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerStarted","Data":"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c"} Feb 19 06:51:50 crc kubenswrapper[5012]: I0219 06:51:50.696744 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-drxrq" podStartSLOduration=2.433438421 podStartE2EDuration="6.696724122s" podCreationTimestamp="2026-02-19 06:51:44 +0000 UTC" firstStartedPulling="2026-02-19 06:51:45.609073364 +0000 UTC m=+5201.642395933" lastFinishedPulling="2026-02-19 06:51:49.872359065 +0000 UTC m=+5205.905681634" observedRunningTime="2026-02-19 06:51:50.691338662 +0000 UTC m=+5206.724661231" watchObservedRunningTime="2026-02-19 06:51:50.696724122 +0000 UTC m=+5206.730046691" Feb 19 06:51:54 crc kubenswrapper[5012]: I0219 06:51:54.429844 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:54 crc kubenswrapper[5012]: I0219 06:51:54.430390 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:51:55 crc kubenswrapper[5012]: I0219 06:51:55.481353 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drxrq" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" probeResult="failure" output=< Feb 19 06:51:55 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:51:55 crc kubenswrapper[5012]: > Feb 19 06:51:55 crc kubenswrapper[5012]: I0219 06:51:55.703072 5012 scope.go:117] "RemoveContainer" 
containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:51:55 crc kubenswrapper[5012]: E0219 06:51:55.703421 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.864611 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.867085 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.877571 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.933714 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.933871 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 
06:52:03.933896 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.036381 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.036515 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.036545 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.036952 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.037069 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.069528 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.187112 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.740464 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:04 crc kubenswrapper[5012]: W0219 06:52:04.746090 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb884b761_ae1b_4cce_b4fb_478f3c847090.slice/crio-38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70 WatchSource:0}: Error finding container 38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70: Status 404 returned error can't find the container with id 38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70 Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.814648 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerStarted","Data":"38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70"} Feb 19 06:52:05 crc kubenswrapper[5012]: I0219 06:52:05.500129 5012 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drxrq" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" probeResult="failure" output=< Feb 19 06:52:05 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:52:05 crc kubenswrapper[5012]: > Feb 19 06:52:05 crc kubenswrapper[5012]: I0219 06:52:05.825852 5012 generic.go:334] "Generic (PLEG): container finished" podID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerID="f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362" exitCode=0 Feb 19 06:52:05 crc kubenswrapper[5012]: I0219 06:52:05.825902 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerDied","Data":"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362"} Feb 19 06:52:06 crc kubenswrapper[5012]: I0219 06:52:06.709026 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:52:06 crc kubenswrapper[5012]: E0219 06:52:06.709857 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:52:06 crc kubenswrapper[5012]: I0219 06:52:06.843015 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerStarted","Data":"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1"} Feb 19 06:52:07 crc kubenswrapper[5012]: I0219 06:52:07.858895 5012 
generic.go:334] "Generic (PLEG): container finished" podID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerID="00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1" exitCode=0 Feb 19 06:52:07 crc kubenswrapper[5012]: I0219 06:52:07.858965 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerDied","Data":"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1"} Feb 19 06:52:08 crc kubenswrapper[5012]: I0219 06:52:08.882122 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerStarted","Data":"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d"} Feb 19 06:52:08 crc kubenswrapper[5012]: I0219 06:52:08.909726 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bgkfc" podStartSLOduration=3.439005719 podStartE2EDuration="5.90970394s" podCreationTimestamp="2026-02-19 06:52:03 +0000 UTC" firstStartedPulling="2026-02-19 06:52:05.828295221 +0000 UTC m=+5221.861617790" lastFinishedPulling="2026-02-19 06:52:08.298993442 +0000 UTC m=+5224.332316011" observedRunningTime="2026-02-19 06:52:08.902908016 +0000 UTC m=+5224.936230585" watchObservedRunningTime="2026-02-19 06:52:08.90970394 +0000 UTC m=+5224.943026519" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.188217 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.190037 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.243654 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.476122 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.572542 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:52:15 crc kubenswrapper[5012]: I0219 06:52:14.998475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:15 crc kubenswrapper[5012]: I0219 06:52:15.494281 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:52:15 crc kubenswrapper[5012]: I0219 06:52:15.947380 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-drxrq" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" containerID="cri-o://0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" gracePeriod=2 Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.390655 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.440894 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") pod \"296d5f6b-220d-4eda-96e4-c405190f28dc\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.440997 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") pod \"296d5f6b-220d-4eda-96e4-c405190f28dc\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.441032 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") pod \"296d5f6b-220d-4eda-96e4-c405190f28dc\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.442201 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities" (OuterVolumeSpecName: "utilities") pod "296d5f6b-220d-4eda-96e4-c405190f28dc" (UID: "296d5f6b-220d-4eda-96e4-c405190f28dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.463510 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t" (OuterVolumeSpecName: "kube-api-access-84v6t") pod "296d5f6b-220d-4eda-96e4-c405190f28dc" (UID: "296d5f6b-220d-4eda-96e4-c405190f28dc"). InnerVolumeSpecName "kube-api-access-84v6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.543554 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.543583 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.580389 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "296d5f6b-220d-4eda-96e4-c405190f28dc" (UID: "296d5f6b-220d-4eda-96e4-c405190f28dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.645412 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959104 5012 generic.go:334] "Generic (PLEG): container finished" podID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerID="0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" exitCode=0 Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959176 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerDied","Data":"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c"} Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959621 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerDied","Data":"6de2c1173cadccb7b874d72d1d3862009696c41fda3c7af45ebce6e2f0fa3d9c"} Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959237 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959658 5012 scope.go:117] "RemoveContainer" containerID="0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.984095 5012 scope.go:117] "RemoveContainer" containerID="ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.001824 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.008956 5012 scope.go:117] "RemoveContainer" containerID="07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.023083 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.045470 5012 scope.go:117] "RemoveContainer" containerID="0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" Feb 19 06:52:17 crc kubenswrapper[5012]: E0219 06:52:17.046014 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c\": container with ID starting with 0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c not found: ID does not exist" containerID="0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.046051 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c"} err="failed to get container status \"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c\": rpc error: code = NotFound desc = could not find container \"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c\": container with ID starting with 0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c not found: ID does not exist" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.046074 5012 scope.go:117] "RemoveContainer" containerID="ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec" Feb 19 06:52:17 crc kubenswrapper[5012]: E0219 06:52:17.046659 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec\": container with ID starting with ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec not found: ID does not exist" containerID="ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.046726 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec"} err="failed to get container status \"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec\": rpc error: code = NotFound desc = could not find container \"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec\": container with ID starting with ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec not found: ID does not exist" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.046775 5012 scope.go:117] "RemoveContainer" containerID="07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1" Feb 19 06:52:17 crc kubenswrapper[5012]: E0219 
06:52:17.047141 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1\": container with ID starting with 07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1 not found: ID does not exist" containerID="07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.047183 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1"} err="failed to get container status \"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1\": rpc error: code = NotFound desc = could not find container \"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1\": container with ID starting with 07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1 not found: ID does not exist" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.297431 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.982658 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bgkfc" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="registry-server" containerID="cri-o://b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" gracePeriod=2 Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.477364 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.586222 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") pod \"b884b761-ae1b-4cce-b4fb-478f3c847090\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.586469 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") pod \"b884b761-ae1b-4cce-b4fb-478f3c847090\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.586635 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") pod \"b884b761-ae1b-4cce-b4fb-478f3c847090\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.587388 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities" (OuterVolumeSpecName: "utilities") pod "b884b761-ae1b-4cce-b4fb-478f3c847090" (UID: "b884b761-ae1b-4cce-b4fb-478f3c847090"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.594405 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs" (OuterVolumeSpecName: "kube-api-access-gfjbs") pod "b884b761-ae1b-4cce-b4fb-478f3c847090" (UID: "b884b761-ae1b-4cce-b4fb-478f3c847090"). InnerVolumeSpecName "kube-api-access-gfjbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.620323 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b884b761-ae1b-4cce-b4fb-478f3c847090" (UID: "b884b761-ae1b-4cce-b4fb-478f3c847090"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.689276 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.689323 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.689334 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.715465 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" path="/var/lib/kubelet/pods/296d5f6b-220d-4eda-96e4-c405190f28dc/volumes" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009027 5012 generic.go:334] "Generic (PLEG): container finished" podID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerID="b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" exitCode=0 Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009108 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" 
event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerDied","Data":"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d"} Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009154 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerDied","Data":"38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70"} Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009199 5012 scope.go:117] "RemoveContainer" containerID="b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009706 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.037081 5012 scope.go:117] "RemoveContainer" containerID="00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.064097 5012 scope.go:117] "RemoveContainer" containerID="f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.071315 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.082266 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.109659 5012 scope.go:117] "RemoveContainer" containerID="b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" Feb 19 06:52:19 crc kubenswrapper[5012]: E0219 06:52:19.116439 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d\": container 
with ID starting with b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d not found: ID does not exist" containerID="b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.116487 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d"} err="failed to get container status \"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d\": rpc error: code = NotFound desc = could not find container \"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d\": container with ID starting with b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d not found: ID does not exist" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.116514 5012 scope.go:117] "RemoveContainer" containerID="00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1" Feb 19 06:52:19 crc kubenswrapper[5012]: E0219 06:52:19.116996 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1\": container with ID starting with 00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1 not found: ID does not exist" containerID="00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.117034 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1"} err="failed to get container status \"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1\": rpc error: code = NotFound desc = could not find container \"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1\": container with ID starting with 00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1 not 
found: ID does not exist" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.117058 5012 scope.go:117] "RemoveContainer" containerID="f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362" Feb 19 06:52:19 crc kubenswrapper[5012]: E0219 06:52:19.117431 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362\": container with ID starting with f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362 not found: ID does not exist" containerID="f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.117456 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362"} err="failed to get container status \"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362\": rpc error: code = NotFound desc = could not find container \"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362\": container with ID starting with f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362 not found: ID does not exist" Feb 19 06:52:20 crc kubenswrapper[5012]: I0219 06:52:20.704184 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:52:20 crc kubenswrapper[5012]: I0219 06:52:20.730316 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" path="/var/lib/kubelet/pods/b884b761-ae1b-4cce-b4fb-478f3c847090/volumes" Feb 19 06:52:21 crc kubenswrapper[5012]: I0219 06:52:21.035598 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941"} Feb 19 06:52:25 crc kubenswrapper[5012]: I0219 06:52:25.070107 5012 generic.go:334] "Generic (PLEG): container finished" podID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" containerID="eee8e3c869cd66a7f0fdd02a2d1a9b68c170e21d37668f164d794773d0198ed5" exitCode=0 Feb 19 06:52:25 crc kubenswrapper[5012]: I0219 06:52:25.070148 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-57vjb" event={"ID":"809fe06a-5a2d-4ac8-90d0-5a2569f3e116","Type":"ContainerDied","Data":"eee8e3c869cd66a7f0fdd02a2d1a9b68c170e21d37668f164d794773d0198ed5"} Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.223272 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.248854 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") pod \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.249519 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") pod \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.250021 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host" (OuterVolumeSpecName: "host") pod "809fe06a-5a2d-4ac8-90d0-5a2569f3e116" (UID: "809fe06a-5a2d-4ac8-90d0-5a2569f3e116"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.261353 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb" (OuterVolumeSpecName: "kube-api-access-sjgqb") pod "809fe06a-5a2d-4ac8-90d0-5a2569f3e116" (UID: "809fe06a-5a2d-4ac8-90d0-5a2569f3e116"). InnerVolumeSpecName "kube-api-access-sjgqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.271099 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-57vjb"] Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.279688 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-57vjb"] Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.352414 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.352445 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.717202 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" path="/var/lib/kubelet/pods/809fe06a-5a2d-4ac8-90d0-5a2569f3e116/volumes" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.102254 5012 scope.go:117] "RemoveContainer" containerID="eee8e3c869cd66a7f0fdd02a2d1a9b68c170e21d37668f164d794773d0198ed5" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.102389 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.515376 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hncx9/crc-debug-nxgm9"] Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.515967 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="extract-content" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.515988 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="extract-content" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516014 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="extract-utilities" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516027 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="extract-utilities" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516043 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="extract-utilities" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516056 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="extract-utilities" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516100 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516112 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516148 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="extract-content" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516160 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="extract-content" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516180 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516192 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516221 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" containerName="container-00" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516257 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" containerName="container-00" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516632 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516655 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516687 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" containerName="container-00" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.517730 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.578585 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.578671 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.682049 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.683250 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.683438 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc 
kubenswrapper[5012]: I0219 06:52:27.715037 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.850190 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:28 crc kubenswrapper[5012]: I0219 06:52:28.112364 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" event={"ID":"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0","Type":"ContainerStarted","Data":"3cefb9b619fd38672097387c5f39889450729554bf7d24dcd7f7df0f70d0fe02"} Feb 19 06:52:29 crc kubenswrapper[5012]: I0219 06:52:29.123487 5012 generic.go:334] "Generic (PLEG): container finished" podID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" containerID="5f63a56f4608620beb3ed89096dc42c009d98d6c2c1d04eec01e0fcc60d308fd" exitCode=0 Feb 19 06:52:29 crc kubenswrapper[5012]: I0219 06:52:29.123555 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" event={"ID":"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0","Type":"ContainerDied","Data":"5f63a56f4608620beb3ed89096dc42c009d98d6c2c1d04eec01e0fcc60d308fd"} Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.336564 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.450381 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") pod \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.450464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") pod \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.450836 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host" (OuterVolumeSpecName: "host") pod "4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" (UID: "4f23c1e7-2ab4-499a-b925-f11ed5aff7d0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.465550 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r" (OuterVolumeSpecName: "kube-api-access-xhl5r") pod "4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" (UID: "4f23c1e7-2ab4-499a-b925-f11ed5aff7d0"). InnerVolumeSpecName "kube-api-access-xhl5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.552215 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.552249 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.145346 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" event={"ID":"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0","Type":"ContainerDied","Data":"3cefb9b619fd38672097387c5f39889450729554bf7d24dcd7f7df0f70d0fe02"} Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.145643 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cefb9b619fd38672097387c5f39889450729554bf7d24dcd7f7df0f70d0fe02" Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.145453 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.495342 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-nxgm9"] Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.506185 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-nxgm9"] Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.723284 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" path="/var/lib/kubelet/pods/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0/volumes" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.724186 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hncx9/crc-debug-9hbl5"] Feb 19 06:52:32 crc kubenswrapper[5012]: E0219 06:52:32.724676 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" containerName="container-00" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.724702 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" containerName="container-00" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.724978 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" containerName="container-00" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.726440 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.800853 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.800932 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.903031 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.903386 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.903556 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc 
kubenswrapper[5012]: I0219 06:52:32.929889 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:33 crc kubenswrapper[5012]: I0219 06:52:33.053279 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:33 crc kubenswrapper[5012]: W0219 06:52:33.114572 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e79e07d_bc20_4488_8ebe_4805bf39854e.slice/crio-f25c286c5c0d69eed91d8be86c8ad827a202b08882a99cf863524f0a82aeb1c5 WatchSource:0}: Error finding container f25c286c5c0d69eed91d8be86c8ad827a202b08882a99cf863524f0a82aeb1c5: Status 404 returned error can't find the container with id f25c286c5c0d69eed91d8be86c8ad827a202b08882a99cf863524f0a82aeb1c5 Feb 19 06:52:33 crc kubenswrapper[5012]: I0219 06:52:33.169132 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" event={"ID":"2e79e07d-bc20-4488-8ebe-4805bf39854e","Type":"ContainerStarted","Data":"f25c286c5c0d69eed91d8be86c8ad827a202b08882a99cf863524f0a82aeb1c5"} Feb 19 06:52:34 crc kubenswrapper[5012]: I0219 06:52:34.189655 5012 generic.go:334] "Generic (PLEG): container finished" podID="2e79e07d-bc20-4488-8ebe-4805bf39854e" containerID="c71bf472cad9749e4de982b5b389c0bde1476a8315392a2d6a49a409e617e7e5" exitCode=0 Feb 19 06:52:34 crc kubenswrapper[5012]: I0219 06:52:34.189776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" event={"ID":"2e79e07d-bc20-4488-8ebe-4805bf39854e","Type":"ContainerDied","Data":"c71bf472cad9749e4de982b5b389c0bde1476a8315392a2d6a49a409e617e7e5"} Feb 19 
06:52:34 crc kubenswrapper[5012]: I0219 06:52:34.254522 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-9hbl5"] Feb 19 06:52:34 crc kubenswrapper[5012]: I0219 06:52:34.269356 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-9hbl5"] Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.344382 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.370065 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host" (OuterVolumeSpecName: "host") pod "2e79e07d-bc20-4488-8ebe-4805bf39854e" (UID: "2e79e07d-bc20-4488-8ebe-4805bf39854e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.370335 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") pod \"2e79e07d-bc20-4488-8ebe-4805bf39854e\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.370435 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") pod \"2e79e07d-bc20-4488-8ebe-4805bf39854e\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.371486 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.380468 5012 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25" (OuterVolumeSpecName: "kube-api-access-xxm25") pod "2e79e07d-bc20-4488-8ebe-4805bf39854e" (UID: "2e79e07d-bc20-4488-8ebe-4805bf39854e"). InnerVolumeSpecName "kube-api-access-xxm25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.473427 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:36 crc kubenswrapper[5012]: I0219 06:52:36.216821 5012 scope.go:117] "RemoveContainer" containerID="c71bf472cad9749e4de982b5b389c0bde1476a8315392a2d6a49a409e617e7e5" Feb 19 06:52:36 crc kubenswrapper[5012]: I0219 06:52:36.216881 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:36 crc kubenswrapper[5012]: I0219 06:52:36.714156 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e79e07d-bc20-4488-8ebe-4805bf39854e" path="/var/lib/kubelet/pods/2e79e07d-bc20-4488-8ebe-4805bf39854e/volumes" Feb 19 06:53:14 crc kubenswrapper[5012]: I0219 06:53:14.981108 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f669f7d76-2qg4s_875bbaf1-6c43-4474-9f7b-8202b2d5ee1c/barbican-api/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.103363 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f669f7d76-2qg4s_875bbaf1-6c43-4474-9f7b-8202b2d5ee1c/barbican-api-log/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.187371 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb75756b-hd4xs_ee216ad2-2baf-4bba-a3fe-81acf9218af0/barbican-keystone-listener/0.log" Feb 19 06:53:15 crc 
kubenswrapper[5012]: I0219 06:53:15.345322 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb75756b-hd4xs_ee216ad2-2baf-4bba-a3fe-81acf9218af0/barbican-keystone-listener-log/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.415601 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779bfc8b79-ffj7v_9133f0f1-2d9e-462e-ba56-8a206f61bd03/barbican-worker/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.511767 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779bfc8b79-ffj7v_9133f0f1-2d9e-462e-ba56-8a206f61bd03/barbican-worker-log/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.624113 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb_ebf47868-aec9-4f2e-8c08-499161f45b18/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.809100 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/ceilometer-central-agent/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.922646 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/proxy-httpd/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.923744 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/ceilometer-notification-agent/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.928264 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/sg-core/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.127610 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_4c548edc-6755-4310-9b8d-780a384ec6bd/cinder-api-log/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.332126 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_42946b07-c256-43a7-99d0-45f94c019663/cinder-scheduler/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.344340 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4c548edc-6755-4310-9b8d-780a384ec6bd/cinder-api/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.388781 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_42946b07-c256-43a7-99d0-45f94c019663/probe/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.556182 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8sh74_a37d4335-7c06-4fa3-af51-6cfe6fb9a020/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.669886 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-bg5db_8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.800866 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/init/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.968680 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/init/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.080146 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-l597r_02358307-dba6-44fa-9799-2440b1496c55/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.130207 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/dnsmasq-dns/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.283788 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8cfddc12-1c4c-4faf-9edb-71fb80608785/glance-httpd/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.321519 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8cfddc12-1c4c-4faf-9edb-71fb80608785/glance-log/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.451409 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f55309b7-09e5-4496-8995-f03681386729/glance-httpd/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.480811 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f55309b7-09e5-4496-8995-f03681386729/glance-log/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.769098 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cdcb467fb-8tvnz_6c937bbe-f068-4e5b-81ad-9455104062da/horizon/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.810988 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5_d869003b-7b03-4a8b-9f9c-73ca0ec4f359/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.096948 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kjhk7_0037b322-99bb-4ae2-aba4-85ddcd8243ae/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.248104 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cdcb467fb-8tvnz_6c937bbe-f068-4e5b-81ad-9455104062da/horizon-log/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.358907 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524681-x9bcr_86c7e36d-88e3-432a-ad6f-74de626c5f30/keystone-cron/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.592270 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cc79bf66-4a34-43fe-ad03-4e6ce60d2c44/kube-state-metrics/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.680777 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b574779c9-x2bsv_0e0a6a9f-d11f-4084-9742-7780b20fae75/keystone-api/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.759015 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-2n79s_fcace677-35b0-499f-998c-99168fbfa0af/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.186504 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2_534720dc-6ff8-4fdc-9337-6fe77ad1eaa8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.277119 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5ff88b6c7c-5bg66_eb805277-3dfc-4810-9845-3ba928d262c2/neutron-httpd/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.287868 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5ff88b6c7c-5bg66_eb805277-3dfc-4810-9845-3ba928d262c2/neutron-api/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.434242 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/setup-container/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.637831 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/rabbitmq/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.665593 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/setup-container/0.log" Feb 19 06:53:20 crc kubenswrapper[5012]: I0219 06:53:20.658143 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6852caab-c1b6-40cd-b5df-88d22f6016bd/nova-cell0-conductor-conductor/0.log" Feb 19 06:53:20 crc kubenswrapper[5012]: I0219 06:53:20.812434 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_aceef718-9d1c-441d-bf1b-92c0a6831def/nova-cell1-conductor-conductor/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.023188 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1a529b0-65f7-4680-a4fd-4dacebc1ab83/nova-api-log/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.117040 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_661e04e4-4ba2-4ea0-9ba6-3af2949e7e21/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.303967 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p67w4_a6116441-2985-4723-9889-6c3422159243/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:21 crc 
kubenswrapper[5012]: I0219 06:53:21.340803 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1a529b0-65f7-4680-a4fd-4dacebc1ab83/nova-api-api/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.414597 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_396b18f9-9859-4b42-aca1-c29c3724c86c/nova-metadata-log/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.883210 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0/nova-scheduler-scheduler/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.182685 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/mysql-bootstrap/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.421649 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/galera/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.424100 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/mysql-bootstrap/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.648669 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/mysql-bootstrap/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.840000 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/mysql-bootstrap/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.919774 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/galera/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.002005 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstackclient_75258dbe-c223-4e55-92a6-8e588745294a/openstackclient/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.152501 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cr94m_e2c9ac17-43ef-4ccb-83b1-e20ee03289de/ovn-controller/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.375588 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mz9j9_c711491e-0b8b-4737-88c9-bc5e37051ac1/openstack-network-exporter/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.499942 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_396b18f9-9859-4b42-aca1-c29c3724c86c/nova-metadata-metadata/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.527482 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server-init/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.748583 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server-init/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.795624 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.992767 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gxxmx_7335769e-5b13-4d1b-8aa7-e7f192ee9e2b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.050524 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e3e8f67d-0748-4bff-b7c5-8432c7e4ab64/openstack-network-exporter/0.log" Feb 19 06:53:24 crc 
kubenswrapper[5012]: I0219 06:53:24.150441 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovs-vswitchd/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.189469 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e3e8f67d-0748-4bff-b7c5-8432c7e4ab64/ovn-northd/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.336545 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a9e6735-4159-4248-a8f5-6714d386901a/openstack-network-exporter/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.377909 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a9e6735-4159-4248-a8f5-6714d386901a/ovsdbserver-nb/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.490041 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_00790bd0-5fbb-4927-8361-085c9691c171/openstack-network-exporter/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.558059 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_00790bd0-5fbb-4927-8361-085c9691c171/ovsdbserver-sb/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.852898 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f94997dd8-cvnfv_b0ce1e0a-4e51-408c-b3f8-500cf6476b96/placement-api/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.885651 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/init-config-reloader/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.027954 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f94997dd8-cvnfv_b0ce1e0a-4e51-408c-b3f8-500cf6476b96/placement-log/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: 
I0219 06:53:25.075719 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/config-reloader/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.081742 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/prometheus/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.098880 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/init-config-reloader/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.296662 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/thanos-sidecar/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.306397 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/setup-container/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.530397 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/setup-container/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.629940 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/setup-container/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.636655 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/rabbitmq/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.889717 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/setup-container/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 
06:53:25.901842 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/rabbitmq/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.011385 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs_464de984-0dd6-4c4d-aed3-afbf84e0cdcf/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.231006 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-skvzd_07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.322494 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pl267_61bd41ab-cfea-4df2-9be0-8321c6c11ebd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.500031 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7xnxl_86b984ed-bd52-4348-9415-dccff4a0e1a4/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.586874 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9rlns_f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc/ssh-known-hosts-edpm-deployment/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.852554 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59bfbf7475-v98h9_4c9aa274-240d-4d50-b38a-754dd493f351/proxy-server/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.987931 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59bfbf7475-v98h9_4c9aa274-240d-4d50-b38a-754dd493f351/proxy-httpd/0.log" Feb 19 06:53:26 crc 
kubenswrapper[5012]: I0219 06:53:26.993997 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5vxhd_d05da3bc-6c22-4956-9fab-331eed79d175/swift-ring-rebalance/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.287623 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-replicator/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.423163 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-auditor/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.441140 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-reaper/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.609618 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-server/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.609855 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-auditor/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.671205 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-replicator/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.706596 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-server/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.863468 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-auditor/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.869165 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-updater/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.925049 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-expirer/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.977721 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-replicator/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.071430 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-server/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.074700 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-updater/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.162895 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/rsync/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.219446 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/swift-recon-cron/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.407858 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx_73fe066f-3ee6-4ffc-aeb4-874c14fb0b84/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.432170 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_54eccb09-b3ec-45bc-8065-4c5eb9516257/tempest-tests-tempest-tests-runner/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.628314 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_78c125a8-bf69-4524-9b70-be9fe9f313e7/test-operator-logs-container/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.687956 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6_cdccd552-e703-4d8d-86b4-ff481671527f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:29 crc kubenswrapper[5012]: I0219 06:53:29.421742 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_d4778529-f7d0-482b-bd67-003aaa7ca0ae/watcher-applier/0.log" Feb 19 06:53:30 crc kubenswrapper[5012]: I0219 06:53:30.124101 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7d74d5de-7e1d-47cc-8aaa-cb303332a03a/watcher-api-log/0.log" Feb 19 06:53:32 crc kubenswrapper[5012]: I0219 06:53:32.602141 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_f87036fc-fa94-4038-8b65-bb85d8ff6f10/watcher-decision-engine/0.log" Feb 19 06:53:33 crc kubenswrapper[5012]: I0219 06:53:33.825595 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7d74d5de-7e1d-47cc-8aaa-cb303332a03a/watcher-api/0.log" Feb 19 06:53:42 crc kubenswrapper[5012]: I0219 06:53:42.887204 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_38a4a51f-c380-48fc-8f0e-cdd1ea09fa53/memcached/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.428471 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.560539 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.590518 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.594704 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.767887 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.805660 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/extract/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.818998 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 06:54:04 crc kubenswrapper[5012]: I0219 06:54:04.310101 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-kt4nw_11d49fcd-6e31-47e5-84a1-c6ae972e13cb/manager/0.log" Feb 19 06:54:04 crc kubenswrapper[5012]: I0219 06:54:04.704021 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-qzq7x_8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be/manager/0.log" Feb 19 06:54:04 crc 
kubenswrapper[5012]: I0219 06:54:04.770914 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-5szxp_bfca307c-9b00-4c12-bdd6-a394b7cc7cfd/manager/0.log" Feb 19 06:54:05 crc kubenswrapper[5012]: I0219 06:54:05.024250 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-csct6_4f281b5b-b656-4d4a-b628-d4bfe4fc94f9/manager/0.log" Feb 19 06:54:05 crc kubenswrapper[5012]: I0219 06:54:05.560812 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-dgldv_8629b5e4-e6a8-4c47-b76b-f58a26b42912/manager/0.log" Feb 19 06:54:05 crc kubenswrapper[5012]: I0219 06:54:05.913216 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cp8kx_996bfd61-486b-432d-9e09-d3a90ff9124c/manager/0.log" Feb 19 06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.073351 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-9zkvx_dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43/manager/0.log" Feb 19 06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.294987 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-ldrx5_e9e07b56-2724-4046-8a60-81b751fb0588/manager/0.log" Feb 19 06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.509723 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rpbt8_1e872b11-03d6-4d3f-8e06-e10e1e73d917/manager/0.log" Feb 19 06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.689642 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-556xv_8af03a54-ad7a-4684-b5a6-ba83f410e6ed/manager/0.log" Feb 19 
06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.857092 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-27hfc_b123191d-e55b-4ddc-90ea-abcb34c97be2/manager/0.log" Feb 19 06:54:07 crc kubenswrapper[5012]: I0219 06:54:07.027043 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-l65c5_457202a7-ae9f-4d06-8690-d220e532b305/manager/0.log" Feb 19 06:54:07 crc kubenswrapper[5012]: I0219 06:54:07.243163 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4_d6eb3922-90e6-4bb1-8caa-aac6b69c76b0/manager/0.log" Feb 19 06:54:07 crc kubenswrapper[5012]: I0219 06:54:07.506623 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-q57bk_76b34ac4-96f1-4bbc-9969-eb3e1cfc2159/operator/0.log" Feb 19 06:54:07 crc kubenswrapper[5012]: I0219 06:54:07.836279 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cl447_797c14cf-1b4d-4b4e-9dc5-4843e2e77cef/registry-server/0.log" Feb 19 06:54:08 crc kubenswrapper[5012]: I0219 06:54:08.146856 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-25qtj_10e6fa53-581b-4965-8a38-c70a5c61c6d7/manager/0.log" Feb 19 06:54:08 crc kubenswrapper[5012]: I0219 06:54:08.368763 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-nlqtw_08a4f79c-e42e-4609-b104-01b9a05ac95a/manager/0.log" Feb 19 06:54:08 crc kubenswrapper[5012]: I0219 06:54:08.724341 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mqc2w_4a3cde05-282a-4c65-9570-74d04c71a034/operator/0.log" 
Feb 19 06:54:08 crc kubenswrapper[5012]: I0219 06:54:08.946675 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6hfg4_c55ed223-371b-409a-bcb6-8ca6d2a3c908/manager/0.log" Feb 19 06:54:09 crc kubenswrapper[5012]: I0219 06:54:09.382016 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-pcpk8_73e25e30-860d-4faf-b1f3-bc284f7189d1/manager/0.log" Feb 19 06:54:09 crc kubenswrapper[5012]: I0219 06:54:09.488274 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-qjpw6_49d66f3b-e451-4b73-bc6a-4b854a71a4d6/manager/0.log" Feb 19 06:54:09 crc kubenswrapper[5012]: I0219 06:54:09.774535 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-tj54n_d1f124a8-4132-458d-a5a5-1839d31e7772/manager/0.log" Feb 19 06:54:09 crc kubenswrapper[5012]: I0219 06:54:09.779490 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-z5r47_739941d0-4bff-4dae-8f01-636386a37dd0/manager/0.log" Feb 19 06:54:10 crc kubenswrapper[5012]: I0219 06:54:10.072038 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-pqrs7_ef60eda4-7ead-499b-b70f-07a34574096f/manager/0.log" Feb 19 06:54:16 crc kubenswrapper[5012]: I0219 06:54:16.190494 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-xzk2n_0cc1b41b-fbf6-4d0c-b721-dcad09c03feb/manager/0.log" Feb 19 06:54:32 crc kubenswrapper[5012]: I0219 06:54:32.989924 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mbxqf_9102ddf1-e140-48e7-9ecd-14a4c007f5d5/control-plane-machine-set-operator/0.log" Feb 19 06:54:33 crc kubenswrapper[5012]: I0219 06:54:33.163807 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6qvzq_5c537eae-5a27-4a4d-ba9e-0fd7efe72f37/kube-rbac-proxy/0.log" Feb 19 06:54:33 crc kubenswrapper[5012]: I0219 06:54:33.233542 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6qvzq_5c537eae-5a27-4a4d-ba9e-0fd7efe72f37/machine-api-operator/0.log" Feb 19 06:54:44 crc kubenswrapper[5012]: I0219 06:54:44.430230 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:54:44 crc kubenswrapper[5012]: I0219 06:54:44.430871 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:54:48 crc kubenswrapper[5012]: I0219 06:54:48.940791 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-sq68l_3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02/cert-manager-controller/0.log" Feb 19 06:54:49 crc kubenswrapper[5012]: I0219 06:54:49.086775 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-w66zf_4b5870bd-8fb3-4eef-a893-f31ce8bb1506/cert-manager-cainjector/0.log" Feb 19 06:54:49 crc kubenswrapper[5012]: I0219 06:54:49.132239 5012 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-drndq_53138562-0907-4b72-b228-21ef0c561f57/cert-manager-webhook/0.log" Feb 19 06:55:04 crc kubenswrapper[5012]: I0219 06:55:04.372143 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-zvl62_0aad4d6c-fc60-4843-b21b-d4ad6d552d5f/nmstate-console-plugin/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.130461 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tdz8p_4b5e9e17-84bc-4d05-87f9-328826ea39df/nmstate-handler/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.188758 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-hn274_91d45b3f-23b3-4342-8168-667f665ffe82/kube-rbac-proxy/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.239036 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-hn274_91d45b3f-23b3-4342-8168-667f665ffe82/nmstate-metrics/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.340019 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-2smgj_d6ac1260-4ff8-4025-af6e-35711452ef6f/nmstate-operator/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.456814 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-mqtfh_50749fb3-e43e-4874-a0ea-8dabae225f85/nmstate-webhook/0.log" Feb 19 06:55:14 crc kubenswrapper[5012]: I0219 06:55:14.430994 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:55:14 crc kubenswrapper[5012]: I0219 
06:55:14.431660 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.130704 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9t66t_9f3d925a-f08d-4e92-baf3-805f27c9ae35/prometheus-operator/0.log" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.302870 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-cddcp_9364b7f3-e3e3-4432-a4e7-4b80c9a50225/prometheus-operator-admission-webhook/0.log" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.339388 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-rlcjg_3c60bb85-2242-4d9f-95f9-27b2e747727d/prometheus-operator-admission-webhook/0.log" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.491536 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-vw7xl_63ee166b-5027-4928-9196-9488685f87d5/operator/0.log" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.518824 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5grbr_86bcbf15-9553-41af-974c-3418e588e575/perses-operator/0.log" Feb 19 06:55:39 crc kubenswrapper[5012]: I0219 06:55:39.847959 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-c4jbq_fe949ecf-1cb7-47c7-b196-d4851f142c5f/kube-rbac-proxy/0.log" Feb 19 06:55:39 crc kubenswrapper[5012]: I0219 06:55:39.910743 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-c4jbq_fe949ecf-1cb7-47c7-b196-d4851f142c5f/controller/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.060078 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.261288 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.261292 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.277434 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.291054 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.454753 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.461110 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.468966 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.478564 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.692437 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.715614 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/controller/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.736921 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.752370 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.972568 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/frr-metrics/0.log" Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.043225 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/kube-rbac-proxy/0.log" Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.043419 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/kube-rbac-proxy-frr/0.log" Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.195227 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/reloader/0.log" Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.291666 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-hdb84_431a9bf4-479e-4255-9664-554c80fa4376/frr-k8s-webhook-server/0.log" Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.502189 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-558c5c4774-9r4gj_05b78fff-bf4d-4cd6-aba9-b74303a5dd50/manager/0.log" Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.672802 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-699bc447bd-zqv74_ec7fdada-6f6e-4d8b-b2e1-c944050c714c/webhook-server/0.log" Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.957724 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-87ct4_82cb6684-3937-45f8-9f18-56940e88f480/kube-rbac-proxy/0.log" Feb 19 06:55:42 crc kubenswrapper[5012]: I0219 06:55:42.568223 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-87ct4_82cb6684-3937-45f8-9f18-56940e88f480/speaker/0.log" Feb 19 06:55:42 crc kubenswrapper[5012]: I0219 06:55:42.578826 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/frr/0.log" Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.430904 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.431766 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.431857 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.433058 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.433173 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941" gracePeriod=600 Feb 19 06:55:45 crc kubenswrapper[5012]: I0219 06:55:45.151681 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941" exitCode=0 Feb 19 06:55:45 crc kubenswrapper[5012]: I0219 06:55:45.151726 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941"} Feb 19 06:55:45 crc kubenswrapper[5012]: I0219 06:55:45.152575 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"} Feb 19 06:55:45 
crc kubenswrapper[5012]: I0219 06:55:45.152600 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:55:59 crc kubenswrapper[5012]: I0219 06:55:59.515702 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.108926 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.117725 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.133406 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.313117 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.333565 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.338094 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/extract/0.log" 
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.496130 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.653437 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.666147 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.681466 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.877004 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.892428 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/extract/0.log" Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.927546 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 06:56:01 crc kubenswrapper[5012]: I0219 06:56:01.745456 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 06:56:01 crc kubenswrapper[5012]: I0219 06:56:01.919551 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 06:56:01 crc kubenswrapper[5012]: I0219 06:56:01.921757 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 06:56:01 crc kubenswrapper[5012]: I0219 06:56:01.983004 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.145578 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.159002 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.399015 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.682938 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.689791 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.708425 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.862946 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/registry-server/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.876516 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.918591 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.102040 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.238828 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/registry-server/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.273106 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.297629 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.312726 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.460120 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.482146 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.491343 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/extract/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.525209 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jqjls_800f8349-6ef3-44ae-90a0-56c89ca82479/marketplace-operator/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.668134 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.838474 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.857500 5012 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.871755 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.009590 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.032661 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.119066 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.217268 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/registry-server/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.482078 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.482230 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.505760 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.684688 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.695327 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 06:56:05 crc kubenswrapper[5012]: I0219 06:56:05.310074 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/registry-server/0.log" Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.158430 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9t66t_9f3d925a-f08d-4e92-baf3-805f27c9ae35/prometheus-operator/0.log" Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.189825 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-cddcp_9364b7f3-e3e3-4432-a4e7-4b80c9a50225/prometheus-operator-admission-webhook/0.log" Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.192732 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-rlcjg_3c60bb85-2242-4d9f-95f9-27b2e747727d/prometheus-operator-admission-webhook/0.log" Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.218592 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-vw7xl_63ee166b-5027-4928-9196-9488685f87d5/operator/0.log" Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.321669 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5grbr_86bcbf15-9553-41af-974c-3418e588e575/perses-operator/0.log" Feb 19 06:57:44 crc kubenswrapper[5012]: I0219 06:57:44.431194 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:57:44 crc kubenswrapper[5012]: I0219 06:57:44.432062 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:58:14 crc kubenswrapper[5012]: I0219 06:58:14.430543 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:58:14 crc kubenswrapper[5012]: I0219 06:58:14.431279 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:58:19 crc kubenswrapper[5012]: I0219 06:58:19.947187 5012 generic.go:334] "Generic (PLEG): container finished" podID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828" exitCode=0 Feb 19 06:58:19 crc kubenswrapper[5012]: I0219 06:58:19.947365 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-must-gather-hncx9/must-gather-gs9fs" event={"ID":"5afd9390-aa19-4b48-b659-089e59ea82e5","Type":"ContainerDied","Data":"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"} Feb 19 06:58:19 crc kubenswrapper[5012]: I0219 06:58:19.948442 5012 scope.go:117] "RemoveContainer" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828" Feb 19 06:58:20 crc kubenswrapper[5012]: I0219 06:58:20.473433 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hncx9_must-gather-gs9fs_5afd9390-aa19-4b48-b659-089e59ea82e5/gather/0.log" Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.495382 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"] Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.496625 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hncx9/must-gather-gs9fs" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="copy" containerID="cri-o://7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48" gracePeriod=2 Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.511047 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"] Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.906459 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hncx9_must-gather-gs9fs_5afd9390-aa19-4b48-b659-089e59ea82e5/copy/0.log" Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.907472 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.037703 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") pod \"5afd9390-aa19-4b48-b659-089e59ea82e5\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.037766 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") pod \"5afd9390-aa19-4b48-b659-089e59ea82e5\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.044274 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l" (OuterVolumeSpecName: "kube-api-access-vzz4l") pod "5afd9390-aa19-4b48-b659-089e59ea82e5" (UID: "5afd9390-aa19-4b48-b659-089e59ea82e5"). InnerVolumeSpecName "kube-api-access-vzz4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.049004 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hncx9_must-gather-gs9fs_5afd9390-aa19-4b48-b659-089e59ea82e5/copy/0.log" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.049338 5012 generic.go:334] "Generic (PLEG): container finished" podID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerID="7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48" exitCode=143 Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.049383 5012 scope.go:117] "RemoveContainer" containerID="7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.049487 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/must-gather-gs9fs" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.109873 5012 scope.go:117] "RemoveContainer" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.142036 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") on node \"crc\" DevicePath \"\"" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.200825 5012 scope.go:117] "RemoveContainer" containerID="7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48" Feb 19 06:58:29 crc kubenswrapper[5012]: E0219 06:58:29.201282 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48\": container with ID starting with 7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48 not found: ID does not exist" 
containerID="7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.201337 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48"} err="failed to get container status \"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48\": rpc error: code = NotFound desc = could not find container \"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48\": container with ID starting with 7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48 not found: ID does not exist" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.201364 5012 scope.go:117] "RemoveContainer" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828" Feb 19 06:58:29 crc kubenswrapper[5012]: E0219 06:58:29.201612 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828\": container with ID starting with 1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828 not found: ID does not exist" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.201634 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"} err="failed to get container status \"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828\": rpc error: code = NotFound desc = could not find container \"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828\": container with ID starting with 1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828 not found: ID does not exist" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.237063 5012 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5afd9390-aa19-4b48-b659-089e59ea82e5" (UID: "5afd9390-aa19-4b48-b659-089e59ea82e5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.246633 5012 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 06:58:30 crc kubenswrapper[5012]: I0219 06:58:30.714663 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" path="/var/lib/kubelet/pods/5afd9390-aa19-4b48-b659-089e59ea82e5/volumes" Feb 19 06:58:43 crc kubenswrapper[5012]: I0219 06:58:43.303105 5012 scope.go:117] "RemoveContainer" containerID="5f63a56f4608620beb3ed89096dc42c009d98d6c2c1d04eec01e0fcc60d308fd" Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.430942 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.432256 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.432480 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.433604 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.433837 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" gracePeriod=600 Feb 19 06:58:44 crc kubenswrapper[5012]: E0219 06:58:44.571044 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:58:45 crc kubenswrapper[5012]: I0219 06:58:45.271921 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" exitCode=0 Feb 19 06:58:45 crc kubenswrapper[5012]: I0219 06:58:45.271979 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"} Feb 19 06:58:45 crc 
kubenswrapper[5012]: I0219 06:58:45.272011 5012 scope.go:117] "RemoveContainer" containerID="48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941" Feb 19 06:58:45 crc kubenswrapper[5012]: I0219 06:58:45.273023 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 06:58:45 crc kubenswrapper[5012]: E0219 06:58:45.273656 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:58:58 crc kubenswrapper[5012]: I0219 06:58:58.704215 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 06:58:58 crc kubenswrapper[5012]: E0219 06:58:58.706192 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:59:12 crc kubenswrapper[5012]: I0219 06:59:12.705610 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 06:59:12 crc kubenswrapper[5012]: E0219 06:59:12.706373 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:59:23 crc kubenswrapper[5012]: I0219 06:59:23.704329 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 06:59:23 crc kubenswrapper[5012]: E0219 06:59:23.705485 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:59:37 crc kubenswrapper[5012]: I0219 06:59:37.703621 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 06:59:37 crc kubenswrapper[5012]: E0219 06:59:37.704804 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:59:52 crc kubenswrapper[5012]: I0219 06:59:52.703610 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 06:59:52 crc kubenswrapper[5012]: E0219 06:59:52.704367 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.170996 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"] Feb 19 07:00:00 crc kubenswrapper[5012]: E0219 07:00:00.172827 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="gather" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.172863 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="gather" Feb 19 07:00:00 crc kubenswrapper[5012]: E0219 07:00:00.172892 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="copy" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.172913 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="copy" Feb 19 07:00:00 crc kubenswrapper[5012]: E0219 07:00:00.172974 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e79e07d-bc20-4488-8ebe-4805bf39854e" containerName="container-00" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.172991 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e79e07d-bc20-4488-8ebe-4805bf39854e" containerName="container-00" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.173498 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="copy" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.173550 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e79e07d-bc20-4488-8ebe-4805bf39854e" containerName="container-00" Feb 19 07:00:00 
crc kubenswrapper[5012]: I0219 07:00:00.173596 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="gather" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.175275 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.178547 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.178956 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.188371 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"] Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.313187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.313268 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.313352 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.415557 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.415733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.415805 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.416669 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.435102 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.440361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.505935 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.987066 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"] Feb 19 07:00:01 crc kubenswrapper[5012]: I0219 07:00:01.254573 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" event={"ID":"6a395030-9ca2-4aae-b3d5-1a1a58029659","Type":"ContainerStarted","Data":"f1dc67997b3e5078839b18d57a364d21c0305f67aed62572ceddbff97fbef117"} Feb 19 07:00:01 crc kubenswrapper[5012]: I0219 07:00:01.254903 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" event={"ID":"6a395030-9ca2-4aae-b3d5-1a1a58029659","Type":"ContainerStarted","Data":"1cd3a44b5b379c621a9322bfcda0f41950bd313bd82a9045cbd6665063592022"} Feb 19 07:00:01 crc kubenswrapper[5012]: I0219 07:00:01.272008 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" podStartSLOduration=1.271992097 podStartE2EDuration="1.271992097s" podCreationTimestamp="2026-02-19 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:00:01.267154549 +0000 UTC m=+5697.300477118" watchObservedRunningTime="2026-02-19 07:00:01.271992097 +0000 UTC m=+5697.305314666" Feb 19 07:00:02 crc kubenswrapper[5012]: I0219 07:00:02.270155 5012 generic.go:334] "Generic (PLEG): container finished" podID="6a395030-9ca2-4aae-b3d5-1a1a58029659" containerID="f1dc67997b3e5078839b18d57a364d21c0305f67aed62572ceddbff97fbef117" exitCode=0 Feb 19 07:00:02 crc kubenswrapper[5012]: I0219 07:00:02.270595 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" event={"ID":"6a395030-9ca2-4aae-b3d5-1a1a58029659","Type":"ContainerDied","Data":"f1dc67997b3e5078839b18d57a364d21c0305f67aed62572ceddbff97fbef117"} Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.657696 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.797216 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") pod \"6a395030-9ca2-4aae-b3d5-1a1a58029659\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.797480 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") pod \"6a395030-9ca2-4aae-b3d5-1a1a58029659\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.797668 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") pod \"6a395030-9ca2-4aae-b3d5-1a1a58029659\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.797793 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a395030-9ca2-4aae-b3d5-1a1a58029659" (UID: "6a395030-9ca2-4aae-b3d5-1a1a58029659"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.798781 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.804507 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm" (OuterVolumeSpecName: "kube-api-access-4hvdm") pod "6a395030-9ca2-4aae-b3d5-1a1a58029659" (UID: "6a395030-9ca2-4aae-b3d5-1a1a58029659"). InnerVolumeSpecName "kube-api-access-4hvdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.805502 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a395030-9ca2-4aae-b3d5-1a1a58029659" (UID: "6a395030-9ca2-4aae-b3d5-1a1a58029659"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.901726 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.901768 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") on node \"crc\" DevicePath \"\"" Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.316925 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" event={"ID":"6a395030-9ca2-4aae-b3d5-1a1a58029659","Type":"ContainerDied","Data":"1cd3a44b5b379c621a9322bfcda0f41950bd313bd82a9045cbd6665063592022"} Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.316975 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.316978 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd3a44b5b379c621a9322bfcda0f41950bd313bd82a9045cbd6665063592022" Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.386187 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.403838 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.717567 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d1557b7-91d6-4aac-8306-59d97142a76c" path="/var/lib/kubelet/pods/0d1557b7-91d6-4aac-8306-59d97142a76c/volumes" Feb 19 07:00:07 crc kubenswrapper[5012]: I0219 07:00:07.703563 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:07 crc kubenswrapper[5012]: E0219 07:00:07.704538 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:21 crc kubenswrapper[5012]: I0219 07:00:21.704391 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:21 crc kubenswrapper[5012]: E0219 07:00:21.705674 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:32 crc kubenswrapper[5012]: I0219 07:00:32.707374 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:32 crc kubenswrapper[5012]: E0219 07:00:32.708392 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:43 crc kubenswrapper[5012]: I0219 07:00:43.420973 5012 scope.go:117] "RemoveContainer" containerID="1a2cb819f1490aeaeb6e29cd5e196789ce8e9978f4d9987b6edfc7cea46ee158" Feb 19 07:00:44 crc kubenswrapper[5012]: I0219 07:00:44.715047 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:44 crc kubenswrapper[5012]: E0219 07:00:44.715878 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:55 crc kubenswrapper[5012]: I0219 07:00:55.703158 5012 scope.go:117] "RemoveContainer" 
containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:55 crc kubenswrapper[5012]: E0219 07:00:55.704335 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.164603 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524741-6zcg8"] Feb 19 07:01:00 crc kubenswrapper[5012]: E0219 07:01:00.165661 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a395030-9ca2-4aae-b3d5-1a1a58029659" containerName="collect-profiles" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.165678 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a395030-9ca2-4aae-b3d5-1a1a58029659" containerName="collect-profiles" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.165942 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a395030-9ca2-4aae-b3d5-1a1a58029659" containerName="collect-profiles" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.166786 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.180993 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524741-6zcg8"] Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.269937 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.270039 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.270241 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.270287 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.372627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.372712 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.372919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.372963 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.381705 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.381719 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.387737 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.391603 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.499742 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:01 crc kubenswrapper[5012]: I0219 07:01:01.033942 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524741-6zcg8"] Feb 19 07:01:02 crc kubenswrapper[5012]: I0219 07:01:02.032209 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524741-6zcg8" event={"ID":"033cc9db-2d87-48a6-8854-4d3a922a38d2","Type":"ContainerStarted","Data":"010802bde907dd8632f62a4a6bfc6c99ea00d590f5191615ac5b4d8228c987b0"} Feb 19 07:01:02 crc kubenswrapper[5012]: I0219 07:01:02.032570 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524741-6zcg8" event={"ID":"033cc9db-2d87-48a6-8854-4d3a922a38d2","Type":"ContainerStarted","Data":"d041cb0aa86c139e4c0d207a084924091052e1d734602446ac5dfe526869bf4a"} Feb 19 07:01:02 crc kubenswrapper[5012]: I0219 07:01:02.054812 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524741-6zcg8" podStartSLOduration=2.054795178 podStartE2EDuration="2.054795178s" podCreationTimestamp="2026-02-19 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:01:02.049311635 +0000 UTC m=+5758.082634204" watchObservedRunningTime="2026-02-19 07:01:02.054795178 +0000 UTC m=+5758.088117747" Feb 19 07:01:06 crc kubenswrapper[5012]: I0219 07:01:06.078048 5012 generic.go:334] "Generic (PLEG): container finished" podID="033cc9db-2d87-48a6-8854-4d3a922a38d2" containerID="010802bde907dd8632f62a4a6bfc6c99ea00d590f5191615ac5b4d8228c987b0" exitCode=0 Feb 19 07:01:06 crc kubenswrapper[5012]: I0219 07:01:06.078509 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524741-6zcg8" 
event={"ID":"033cc9db-2d87-48a6-8854-4d3a922a38d2","Type":"ContainerDied","Data":"010802bde907dd8632f62a4a6bfc6c99ea00d590f5191615ac5b4d8228c987b0"} Feb 19 07:01:06 crc kubenswrapper[5012]: I0219 07:01:06.703691 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:06 crc kubenswrapper[5012]: E0219 07:01:06.704507 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.548401 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.648870 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") pod \"033cc9db-2d87-48a6-8854-4d3a922a38d2\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.648952 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") pod \"033cc9db-2d87-48a6-8854-4d3a922a38d2\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.648986 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") pod 
\"033cc9db-2d87-48a6-8854-4d3a922a38d2\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.649304 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") pod \"033cc9db-2d87-48a6-8854-4d3a922a38d2\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.661234 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "033cc9db-2d87-48a6-8854-4d3a922a38d2" (UID: "033cc9db-2d87-48a6-8854-4d3a922a38d2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.661830 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw" (OuterVolumeSpecName: "kube-api-access-kgmbw") pod "033cc9db-2d87-48a6-8854-4d3a922a38d2" (UID: "033cc9db-2d87-48a6-8854-4d3a922a38d2"). InnerVolumeSpecName "kube-api-access-kgmbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.695235 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "033cc9db-2d87-48a6-8854-4d3a922a38d2" (UID: "033cc9db-2d87-48a6-8854-4d3a922a38d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.718639 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data" (OuterVolumeSpecName: "config-data") pod "033cc9db-2d87-48a6-8854-4d3a922a38d2" (UID: "033cc9db-2d87-48a6-8854-4d3a922a38d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.752061 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.752104 5012 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.752116 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.752128 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:08 crc kubenswrapper[5012]: I0219 07:01:08.106614 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524741-6zcg8" event={"ID":"033cc9db-2d87-48a6-8854-4d3a922a38d2","Type":"ContainerDied","Data":"d041cb0aa86c139e4c0d207a084924091052e1d734602446ac5dfe526869bf4a"} Feb 19 07:01:08 crc kubenswrapper[5012]: I0219 07:01:08.106691 5012 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="d041cb0aa86c139e4c0d207a084924091052e1d734602446ac5dfe526869bf4a" Feb 19 07:01:08 crc kubenswrapper[5012]: I0219 07:01:08.106700 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:18 crc kubenswrapper[5012]: I0219 07:01:18.703336 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:18 crc kubenswrapper[5012]: E0219 07:01:18.704549 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:32 crc kubenswrapper[5012]: I0219 07:01:32.703845 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:32 crc kubenswrapper[5012]: E0219 07:01:32.704930 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.607486 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:38 crc kubenswrapper[5012]: E0219 07:01:38.608458 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033cc9db-2d87-48a6-8854-4d3a922a38d2" containerName="keystone-cron" 
Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.608471 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="033cc9db-2d87-48a6-8854-4d3a922a38d2" containerName="keystone-cron" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.608655 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="033cc9db-2d87-48a6-8854-4d3a922a38d2" containerName="keystone-cron" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.610004 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.623436 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.737006 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.737183 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.737476 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " 
pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.839190 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.839253 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.839364 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.839933 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.840461 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " 
pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.868025 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.948437 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:39 crc kubenswrapper[5012]: I0219 07:01:39.450042 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:39 crc kubenswrapper[5012]: I0219 07:01:39.470275 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerStarted","Data":"017c2066f6389ee51fc586037fd17ca7f470bb393e6d0c4c4927f4cae8cf8d41"} Feb 19 07:01:40 crc kubenswrapper[5012]: I0219 07:01:40.491177 5012 generic.go:334] "Generic (PLEG): container finished" podID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerID="f41315fc3d6f34513f97bf91d9cdd999354b5e17f3bc12e24f6931f5e2179b3e" exitCode=0 Feb 19 07:01:40 crc kubenswrapper[5012]: I0219 07:01:40.491269 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerDied","Data":"f41315fc3d6f34513f97bf91d9cdd999354b5e17f3bc12e24f6931f5e2179b3e"} Feb 19 07:01:40 crc kubenswrapper[5012]: I0219 07:01:40.496955 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 07:01:41 crc kubenswrapper[5012]: I0219 07:01:41.506239 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerStarted","Data":"c33e2694d77366551c98e8486a7d8298bcc8806ee0a8ca992cf2d7fa38d1bc81"} Feb 19 07:01:43 crc kubenswrapper[5012]: I0219 07:01:43.533368 5012 generic.go:334] "Generic (PLEG): container finished" podID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerID="c33e2694d77366551c98e8486a7d8298bcc8806ee0a8ca992cf2d7fa38d1bc81" exitCode=0 Feb 19 07:01:43 crc kubenswrapper[5012]: I0219 07:01:43.533380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerDied","Data":"c33e2694d77366551c98e8486a7d8298bcc8806ee0a8ca992cf2d7fa38d1bc81"} Feb 19 07:01:43 crc kubenswrapper[5012]: I0219 07:01:43.704148 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:43 crc kubenswrapper[5012]: E0219 07:01:43.704781 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:44 crc kubenswrapper[5012]: I0219 07:01:44.560987 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerStarted","Data":"8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23"} Feb 19 07:01:44 crc kubenswrapper[5012]: I0219 07:01:44.588334 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcmr2" 
podStartSLOduration=3.168040548 podStartE2EDuration="6.588309123s" podCreationTimestamp="2026-02-19 07:01:38 +0000 UTC" firstStartedPulling="2026-02-19 07:01:40.496290364 +0000 UTC m=+5796.529612973" lastFinishedPulling="2026-02-19 07:01:43.916558959 +0000 UTC m=+5799.949881548" observedRunningTime="2026-02-19 07:01:44.585258949 +0000 UTC m=+5800.618581518" watchObservedRunningTime="2026-02-19 07:01:44.588309123 +0000 UTC m=+5800.621631692" Feb 19 07:01:48 crc kubenswrapper[5012]: I0219 07:01:48.949373 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:48 crc kubenswrapper[5012]: I0219 07:01:48.950656 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:49 crc kubenswrapper[5012]: I0219 07:01:49.012575 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:49 crc kubenswrapper[5012]: I0219 07:01:49.709692 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:50 crc kubenswrapper[5012]: I0219 07:01:50.268397 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:51 crc kubenswrapper[5012]: I0219 07:01:51.647907 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcmr2" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="registry-server" containerID="cri-o://8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23" gracePeriod=2 Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.660166 5012 generic.go:334] "Generic (PLEG): container finished" podID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerID="8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23" 
exitCode=0 Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.660289 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerDied","Data":"8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23"} Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.823862 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.953382 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") pod \"3f830c27-7555-43b2-a77d-e6bc05150b6e\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.953473 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") pod \"3f830c27-7555-43b2-a77d-e6bc05150b6e\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.953516 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") pod \"3f830c27-7555-43b2-a77d-e6bc05150b6e\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.954825 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities" (OuterVolumeSpecName: "utilities") pod "3f830c27-7555-43b2-a77d-e6bc05150b6e" (UID: "3f830c27-7555-43b2-a77d-e6bc05150b6e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.959413 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv" (OuterVolumeSpecName: "kube-api-access-tlkcv") pod "3f830c27-7555-43b2-a77d-e6bc05150b6e" (UID: "3f830c27-7555-43b2-a77d-e6bc05150b6e"). InnerVolumeSpecName "kube-api-access-tlkcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.008796 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f830c27-7555-43b2-a77d-e6bc05150b6e" (UID: "3f830c27-7555-43b2-a77d-e6bc05150b6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.056663 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.056715 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.056735 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.676734 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" 
event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerDied","Data":"017c2066f6389ee51fc586037fd17ca7f470bb393e6d0c4c4927f4cae8cf8d41"} Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.677147 5012 scope.go:117] "RemoveContainer" containerID="8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.677445 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.706106 5012 scope.go:117] "RemoveContainer" containerID="c33e2694d77366551c98e8486a7d8298bcc8806ee0a8ca992cf2d7fa38d1bc81" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.740830 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.755014 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:54 crc kubenswrapper[5012]: I0219 07:01:54.653890 5012 scope.go:117] "RemoveContainer" containerID="f41315fc3d6f34513f97bf91d9cdd999354b5e17f3bc12e24f6931f5e2179b3e" Feb 19 07:01:54 crc kubenswrapper[5012]: I0219 07:01:54.729953 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" path="/var/lib/kubelet/pods/3f830c27-7555-43b2-a77d-e6bc05150b6e/volumes" Feb 19 07:01:58 crc kubenswrapper[5012]: I0219 07:01:58.703671 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:58 crc kubenswrapper[5012]: E0219 07:01:58.704986 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.724113 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:02:10 crc kubenswrapper[5012]: E0219 07:02:10.725248 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="registry-server" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.725262 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="registry-server" Feb 19 07:02:10 crc kubenswrapper[5012]: E0219 07:02:10.725275 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="extract-utilities" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.725281 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="extract-utilities" Feb 19 07:02:10 crc kubenswrapper[5012]: E0219 07:02:10.725314 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="extract-content" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.725321 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="extract-content" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.725496 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="registry-server" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.726510 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.728791 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8nbsb"/"default-dockercfg-5pvfq" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.728793 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8nbsb"/"openshift-service-ca.crt" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.730400 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8nbsb"/"kube-root-ca.crt" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.776548 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.777026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.821952 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.878543 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " 
pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.878651 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.879015 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.897175 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:11 crc kubenswrapper[5012]: I0219 07:02:11.049327 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:11 crc kubenswrapper[5012]: I0219 07:02:11.578257 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:02:11 crc kubenswrapper[5012]: I0219 07:02:11.950516 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/must-gather-znn9c" event={"ID":"91bc1236-3737-44f8-a82a-35044bd3258b","Type":"ContainerStarted","Data":"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251"} Feb 19 07:02:11 crc kubenswrapper[5012]: I0219 07:02:11.950919 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/must-gather-znn9c" event={"ID":"91bc1236-3737-44f8-a82a-35044bd3258b","Type":"ContainerStarted","Data":"156e99b97364e87cddf166ac671f99b3f88230e0a4aeb448abb7212f6a34076e"} Feb 19 07:02:12 crc kubenswrapper[5012]: I0219 07:02:12.706996 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:02:12 crc kubenswrapper[5012]: E0219 07:02:12.708223 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:12 crc kubenswrapper[5012]: I0219 07:02:12.967510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/must-gather-znn9c" event={"ID":"91bc1236-3737-44f8-a82a-35044bd3258b","Type":"ContainerStarted","Data":"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d"} Feb 19 07:02:13 crc kubenswrapper[5012]: I0219 07:02:13.000671 5012 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-8nbsb/must-gather-znn9c" podStartSLOduration=3.000653037 podStartE2EDuration="3.000653037s" podCreationTimestamp="2026-02-19 07:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:02:12.995575044 +0000 UTC m=+5829.028897623" watchObservedRunningTime="2026-02-19 07:02:13.000653037 +0000 UTC m=+5829.033975616" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.117452 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-t2lv8"] Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.122346 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.195377 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.195571 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.297736 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc 
kubenswrapper[5012]: I0219 07:02:16.297843 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.297894 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.320104 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.441398 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:17 crc kubenswrapper[5012]: I0219 07:02:17.046699 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" event={"ID":"6087189c-c5d3-4586-95a7-3d7cfd01b5f2","Type":"ContainerStarted","Data":"8925602fab983c962e968ddbebc86a948cd3945bd659b4613398d3aca81b02b2"} Feb 19 07:02:17 crc kubenswrapper[5012]: I0219 07:02:17.047002 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" event={"ID":"6087189c-c5d3-4586-95a7-3d7cfd01b5f2","Type":"ContainerStarted","Data":"2fb250eca206a6fe964f945abc50fa70f4f03b4ced584e3c47672f73313f6c80"} Feb 19 07:02:17 crc kubenswrapper[5012]: I0219 07:02:17.071814 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" podStartSLOduration=1.071797818 podStartE2EDuration="1.071797818s" podCreationTimestamp="2026-02-19 07:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:02:17.063762673 +0000 UTC m=+5833.097085242" watchObservedRunningTime="2026-02-19 07:02:17.071797818 +0000 UTC m=+5833.105120387" Feb 19 07:02:25 crc kubenswrapper[5012]: I0219 07:02:25.704222 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:02:25 crc kubenswrapper[5012]: E0219 07:02:25.707405 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:37 crc 
kubenswrapper[5012]: I0219 07:02:37.703044 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:02:37 crc kubenswrapper[5012]: E0219 07:02:37.704232 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.046672 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"] Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.069027 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"] Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.069601 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.136872 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.136931 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.136968 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.240951 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.241091 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") pod \"redhat-operators-zwzdr\" (UID: 
\"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.241187 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.242859 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.242917 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.275190 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.408769 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.885997 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"] Feb 19 07:02:50 crc kubenswrapper[5012]: I0219 07:02:50.365892 5012 generic.go:334] "Generic (PLEG): container finished" podID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerID="2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8" exitCode=0 Feb 19 07:02:50 crc kubenswrapper[5012]: I0219 07:02:50.365990 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerDied","Data":"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8"} Feb 19 07:02:50 crc kubenswrapper[5012]: I0219 07:02:50.366152 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerStarted","Data":"dfe2599a1e23379af3070d43b629a6fe3b0a2d40d5bdd99f900388c40aebed26"} Feb 19 07:02:51 crc kubenswrapper[5012]: I0219 07:02:51.704064 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:02:51 crc kubenswrapper[5012]: E0219 07:02:51.704608 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:52 crc kubenswrapper[5012]: I0219 07:02:52.384878 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" 
event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerStarted","Data":"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"} Feb 19 07:02:55 crc kubenswrapper[5012]: I0219 07:02:55.417972 5012 generic.go:334] "Generic (PLEG): container finished" podID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerID="82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75" exitCode=0 Feb 19 07:02:55 crc kubenswrapper[5012]: I0219 07:02:55.418058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerDied","Data":"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"} Feb 19 07:02:56 crc kubenswrapper[5012]: I0219 07:02:56.432880 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerStarted","Data":"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"} Feb 19 07:02:56 crc kubenswrapper[5012]: I0219 07:02:56.434826 5012 generic.go:334] "Generic (PLEG): container finished" podID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" containerID="8925602fab983c962e968ddbebc86a948cd3945bd659b4613398d3aca81b02b2" exitCode=0 Feb 19 07:02:56 crc kubenswrapper[5012]: I0219 07:02:56.434876 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" event={"ID":"6087189c-c5d3-4586-95a7-3d7cfd01b5f2","Type":"ContainerDied","Data":"8925602fab983c962e968ddbebc86a948cd3945bd659b4613398d3aca81b02b2"} Feb 19 07:02:56 crc kubenswrapper[5012]: I0219 07:02:56.459416 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zwzdr" podStartSLOduration=2.004872331 podStartE2EDuration="7.459387939s" podCreationTimestamp="2026-02-19 07:02:49 +0000 UTC" firstStartedPulling="2026-02-19 07:02:50.367806051 +0000 UTC 
m=+5866.401128620" lastFinishedPulling="2026-02-19 07:02:55.822321659 +0000 UTC m=+5871.855644228" observedRunningTime="2026-02-19 07:02:56.44956118 +0000 UTC m=+5872.482883789" watchObservedRunningTime="2026-02-19 07:02:56.459387939 +0000 UTC m=+5872.492710518" Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.573432 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.614286 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-t2lv8"] Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.626188 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-t2lv8"] Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.708051 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") pod \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.708319 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") pod \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.708387 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host" (OuterVolumeSpecName: "host") pod "6087189c-c5d3-4586-95a7-3d7cfd01b5f2" (UID: "6087189c-c5d3-4586-95a7-3d7cfd01b5f2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.708788 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") on node \"crc\" DevicePath \"\"" Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.715267 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl" (OuterVolumeSpecName: "kube-api-access-zdxrl") pod "6087189c-c5d3-4586-95a7-3d7cfd01b5f2" (UID: "6087189c-c5d3-4586-95a7-3d7cfd01b5f2"). InnerVolumeSpecName "kube-api-access-zdxrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.813646 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") on node \"crc\" DevicePath \"\"" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.454056 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb250eca206a6fe964f945abc50fa70f4f03b4ced584e3c47672f73313f6c80" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.454143 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.715990 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" path="/var/lib/kubelet/pods/6087189c-c5d3-4586-95a7-3d7cfd01b5f2/volumes" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.826990 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-54sj9"] Feb 19 07:02:58 crc kubenswrapper[5012]: E0219 07:02:58.827450 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" containerName="container-00" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.827470 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" containerName="container-00" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.827727 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" containerName="container-00" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.828561 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.875832 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.876195 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.978394 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.978510 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.978610 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:02:59 crc 
kubenswrapper[5012]: I0219 07:02:59.003976 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.143544 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:02:59 crc kubenswrapper[5012]: W0219 07:02:59.170414 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad8e11d7_43c8_4590_b524_64c0ca3a440b.slice/crio-e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d WatchSource:0}: Error finding container e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d: Status 404 returned error can't find the container with id e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.410022 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.410518 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.465125 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" event={"ID":"ad8e11d7-43c8-4590-b524-64c0ca3a440b","Type":"ContainerStarted","Data":"783537a7e84f3b0ed638f3eb6a2789d1dd27811c0584c5d95f222e682776f22b"} Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.465197 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" 
event={"ID":"ad8e11d7-43c8-4590-b524-64c0ca3a440b","Type":"ContainerStarted","Data":"e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d"} Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.492289 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" podStartSLOduration=1.4922652539999999 podStartE2EDuration="1.492265254s" podCreationTimestamp="2026-02-19 07:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:02:59.47727949 +0000 UTC m=+5875.510602069" watchObservedRunningTime="2026-02-19 07:02:59.492265254 +0000 UTC m=+5875.525587853" Feb 19 07:03:00 crc kubenswrapper[5012]: I0219 07:03:00.473089 5012 generic.go:334] "Generic (PLEG): container finished" podID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" containerID="783537a7e84f3b0ed638f3eb6a2789d1dd27811c0584c5d95f222e682776f22b" exitCode=0 Feb 19 07:03:00 crc kubenswrapper[5012]: I0219 07:03:00.473128 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" event={"ID":"ad8e11d7-43c8-4590-b524-64c0ca3a440b","Type":"ContainerDied","Data":"783537a7e84f3b0ed638f3eb6a2789d1dd27811c0584c5d95f222e682776f22b"} Feb 19 07:03:00 crc kubenswrapper[5012]: I0219 07:03:00.498981 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zwzdr" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" probeResult="failure" output=< Feb 19 07:03:00 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 07:03:00 crc kubenswrapper[5012]: > Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.578025 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.641553 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-54sj9"] Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.650644 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-54sj9"] Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.721845 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") pod \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.721888 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") pod \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.722461 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host" (OuterVolumeSpecName: "host") pod "ad8e11d7-43c8-4590-b524-64c0ca3a440b" (UID: "ad8e11d7-43c8-4590-b524-64c0ca3a440b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.728514 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx" (OuterVolumeSpecName: "kube-api-access-kf5kx") pod "ad8e11d7-43c8-4590-b524-64c0ca3a440b" (UID: "ad8e11d7-43c8-4590-b524-64c0ca3a440b"). InnerVolumeSpecName "kube-api-access-kf5kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.824605 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") on node \"crc\" DevicePath \"\"" Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.824895 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") on node \"crc\" DevicePath \"\"" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.489412 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.489459 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.724988 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" path="/var/lib/kubelet/pods/ad8e11d7-43c8-4590-b524-64c0ca3a440b/volumes" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.869204 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-g7dqh"] Feb 19 07:03:02 crc kubenswrapper[5012]: E0219 07:03:02.869876 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" containerName="container-00" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.869948 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" containerName="container-00" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.870214 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" 
containerName="container-00" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.870906 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.949720 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.949996 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.051824 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.052156 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.052593 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") 
pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.096448 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.195627 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:03 crc kubenswrapper[5012]: W0219 07:03:03.242140 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod216e052a_9145_4dba_a625_f9262c5f27cb.slice/crio-c23e6e4d668bfa0ca931eea2640f45e2f115364725ac474ee87b528a8fd5124e WatchSource:0}: Error finding container c23e6e4d668bfa0ca931eea2640f45e2f115364725ac474ee87b528a8fd5124e: Status 404 returned error can't find the container with id c23e6e4d668bfa0ca931eea2640f45e2f115364725ac474ee87b528a8fd5124e Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.514251 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" event={"ID":"216e052a-9145-4dba-a625-f9262c5f27cb","Type":"ContainerStarted","Data":"c23e6e4d668bfa0ca931eea2640f45e2f115364725ac474ee87b528a8fd5124e"} Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.523034 5012 generic.go:334] "Generic (PLEG): container finished" podID="216e052a-9145-4dba-a625-f9262c5f27cb" containerID="a9b96cbca646ccd46816aca3328f7acd269a5cdafce9ec48010be765ddb2c162" exitCode=0 Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.523109 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" 
event={"ID":"216e052a-9145-4dba-a625-f9262c5f27cb","Type":"ContainerDied","Data":"a9b96cbca646ccd46816aca3328f7acd269a5cdafce9ec48010be765ddb2c162"} Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.561719 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-g7dqh"] Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.573370 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-g7dqh"] Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.713203 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:03:04 crc kubenswrapper[5012]: E0219 07:03:04.713721 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.647605 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.807980 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") pod \"216e052a-9145-4dba-a625-f9262c5f27cb\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.808101 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") pod \"216e052a-9145-4dba-a625-f9262c5f27cb\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.808280 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host" (OuterVolumeSpecName: "host") pod "216e052a-9145-4dba-a625-f9262c5f27cb" (UID: "216e052a-9145-4dba-a625-f9262c5f27cb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.813968 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8" (OuterVolumeSpecName: "kube-api-access-gtmj8") pod "216e052a-9145-4dba-a625-f9262c5f27cb" (UID: "216e052a-9145-4dba-a625-f9262c5f27cb"). InnerVolumeSpecName "kube-api-access-gtmj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.910002 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") on node \"crc\" DevicePath \"\"" Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.910032 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") on node \"crc\" DevicePath \"\"" Feb 19 07:03:06 crc kubenswrapper[5012]: I0219 07:03:06.543110 5012 scope.go:117] "RemoveContainer" containerID="a9b96cbca646ccd46816aca3328f7acd269a5cdafce9ec48010be765ddb2c162" Feb 19 07:03:06 crc kubenswrapper[5012]: I0219 07:03:06.543257 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" Feb 19 07:03:06 crc kubenswrapper[5012]: I0219 07:03:06.719361 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216e052a-9145-4dba-a625-f9262c5f27cb" path="/var/lib/kubelet/pods/216e052a-9145-4dba-a625-f9262c5f27cb/volumes" Feb 19 07:03:09 crc kubenswrapper[5012]: I0219 07:03:09.477689 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:03:09 crc kubenswrapper[5012]: I0219 07:03:09.538894 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:03:09 crc kubenswrapper[5012]: I0219 07:03:09.728366 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"] Feb 19 07:03:10 crc kubenswrapper[5012]: I0219 07:03:10.582949 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zwzdr" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" 
containerName="registry-server" containerID="cri-o://5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2" gracePeriod=2 Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.030127 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.206062 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") pod \"f1690cd8-3b2d-461b-810a-4958ef591f15\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.206252 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") pod \"f1690cd8-3b2d-461b-810a-4958ef591f15\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.206408 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") pod \"f1690cd8-3b2d-461b-810a-4958ef591f15\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.206991 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities" (OuterVolumeSpecName: "utilities") pod "f1690cd8-3b2d-461b-810a-4958ef591f15" (UID: "f1690cd8-3b2d-461b-810a-4958ef591f15"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.211810 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9" (OuterVolumeSpecName: "kube-api-access-cmjm9") pod "f1690cd8-3b2d-461b-810a-4958ef591f15" (UID: "f1690cd8-3b2d-461b-810a-4958ef591f15"). InnerVolumeSpecName "kube-api-access-cmjm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.310146 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.310185 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") on node \"crc\" DevicePath \"\"" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.322883 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1690cd8-3b2d-461b-810a-4958ef591f15" (UID: "f1690cd8-3b2d-461b-810a-4958ef591f15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.411532 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591102 5012 generic.go:334] "Generic (PLEG): container finished" podID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerID="5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2" exitCode=0 Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591140 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerDied","Data":"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"} Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591155 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwzdr" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591171 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerDied","Data":"dfe2599a1e23379af3070d43b629a6fe3b0a2d40d5bdd99f900388c40aebed26"} Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591189 5012 scope.go:117] "RemoveContainer" containerID="5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.610014 5012 scope.go:117] "RemoveContainer" containerID="82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.622205 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"] Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.629398 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"] Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.640887 5012 scope.go:117] "RemoveContainer" containerID="2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.669243 5012 scope.go:117] "RemoveContainer" containerID="5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2" Feb 19 07:03:11 crc kubenswrapper[5012]: E0219 07:03:11.669708 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2\": container with ID starting with 5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2 not found: ID does not exist" containerID="5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.669769 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"} err="failed to get container status \"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2\": rpc error: code = NotFound desc = could not find container \"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2\": container with ID starting with 5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2 not found: ID does not exist" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.669797 5012 scope.go:117] "RemoveContainer" containerID="82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75" Feb 19 07:03:11 crc kubenswrapper[5012]: E0219 07:03:11.670189 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75\": container with ID starting with 82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75 not found: ID does not exist" containerID="82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.670219 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"} err="failed to get container status \"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75\": rpc error: code = NotFound desc = could not find container \"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75\": container with ID starting with 82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75 not found: ID does not exist" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.670239 5012 scope.go:117] "RemoveContainer" containerID="2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8" Feb 19 07:03:11 crc kubenswrapper[5012]: E0219 
07:03:11.670719 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8\": container with ID starting with 2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8 not found: ID does not exist" containerID="2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8" Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.670754 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8"} err="failed to get container status \"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8\": rpc error: code = NotFound desc = could not find container \"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8\": container with ID starting with 2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8 not found: ID does not exist" Feb 19 07:03:12 crc kubenswrapper[5012]: I0219 07:03:12.716289 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" path="/var/lib/kubelet/pods/f1690cd8-3b2d-461b-810a-4958ef591f15/volumes" Feb 19 07:03:18 crc kubenswrapper[5012]: I0219 07:03:18.704186 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:03:18 crc kubenswrapper[5012]: E0219 07:03:18.705385 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:03:29 crc kubenswrapper[5012]: I0219 07:03:29.703455 
5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:03:29 crc kubenswrapper[5012]: E0219 07:03:29.704338 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:03:43 crc kubenswrapper[5012]: I0219 07:03:43.703110 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:03:43 crc kubenswrapper[5012]: E0219 07:03:43.703838 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.422554 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f669f7d76-2qg4s_875bbaf1-6c43-4474-9f7b-8202b2d5ee1c/barbican-api/0.log" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.577752 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f669f7d76-2qg4s_875bbaf1-6c43-4474-9f7b-8202b2d5ee1c/barbican-api-log/0.log" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.624180 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb75756b-hd4xs_ee216ad2-2baf-4bba-a3fe-81acf9218af0/barbican-keystone-listener/0.log" Feb 19 07:03:54 
crc kubenswrapper[5012]: I0219 07:03:54.738263 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb75756b-hd4xs_ee216ad2-2baf-4bba-a3fe-81acf9218af0/barbican-keystone-listener-log/0.log" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.824731 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779bfc8b79-ffj7v_9133f0f1-2d9e-462e-ba56-8a206f61bd03/barbican-worker/0.log" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.916398 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779bfc8b79-ffj7v_9133f0f1-2d9e-462e-ba56-8a206f61bd03/barbican-worker-log/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.043951 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb_ebf47868-aec9-4f2e-8c08-499161f45b18/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.184093 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/ceilometer-central-agent/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.209686 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/ceilometer-notification-agent/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.300472 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/proxy-httpd/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.334320 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/sg-core/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.508097 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_4c548edc-6755-4310-9b8d-780a384ec6bd/cinder-api-log/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.693342 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_42946b07-c256-43a7-99d0-45f94c019663/cinder-scheduler/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.699701 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4c548edc-6755-4310-9b8d-780a384ec6bd/cinder-api/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.749328 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_42946b07-c256-43a7-99d0-45f94c019663/probe/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.928929 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8sh74_a37d4335-7c06-4fa3-af51-6cfe6fb9a020/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.969513 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-bg5db_8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.100462 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/init/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.310938 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/init/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.390807 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-l597r_02358307-dba6-44fa-9799-2440b1496c55/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.441207 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/dnsmasq-dns/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.560899 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8cfddc12-1c4c-4faf-9edb-71fb80608785/glance-log/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.597797 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8cfddc12-1c4c-4faf-9edb-71fb80608785/glance-httpd/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.932831 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f55309b7-09e5-4496-8995-f03681386729/glance-log/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.958106 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f55309b7-09e5-4496-8995-f03681386729/glance-httpd/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.133016 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cdcb467fb-8tvnz_6c937bbe-f068-4e5b-81ad-9455104062da/horizon/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.223558 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5_d869003b-7b03-4a8b-9f9c-73ca0ec4f359/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.478138 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kjhk7_0037b322-99bb-4ae2-aba4-85ddcd8243ae/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.699280 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524681-x9bcr_86c7e36d-88e3-432a-ad6f-74de626c5f30/keystone-cron/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.817191 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cdcb467fb-8tvnz_6c937bbe-f068-4e5b-81ad-9455104062da/horizon-log/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.896862 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524741-6zcg8_033cc9db-2d87-48a6-8854-4d3a922a38d2/keystone-cron/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.032844 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b574779c9-x2bsv_0e0a6a9f-d11f-4084-9742-7780b20fae75/keystone-api/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.045789 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cc79bf66-4a34-43fe-ad03-4e6ce60d2c44/kube-state-metrics/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.123953 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-2n79s_fcace677-35b0-499f-998c-99168fbfa0af/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.481417 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2_534720dc-6ff8-4fdc-9337-6fe77ad1eaa8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.600433 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5ff88b6c7c-5bg66_eb805277-3dfc-4810-9845-3ba928d262c2/neutron-httpd/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.674092 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5ff88b6c7c-5bg66_eb805277-3dfc-4810-9845-3ba928d262c2/neutron-api/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.681660 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/setup-container/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.702655 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.996713 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/setup-container/0.log" Feb 19 07:03:59 crc kubenswrapper[5012]: I0219 07:03:59.092568 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b"} Feb 19 07:03:59 crc kubenswrapper[5012]: I0219 07:03:59.092878 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/rabbitmq/0.log" Feb 19 07:03:59 crc kubenswrapper[5012]: I0219 07:03:59.699575 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6852caab-c1b6-40cd-b5df-88d22f6016bd/nova-cell0-conductor-conductor/0.log" Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.050585 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_aceef718-9d1c-441d-bf1b-92c0a6831def/nova-cell1-conductor-conductor/0.log" 
Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.367577 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_661e04e4-4ba2-4ea0-9ba6-3af2949e7e21/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.479333 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1a529b0-65f7-4680-a4fd-4dacebc1ab83/nova-api-log/0.log" Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.530175 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p67w4_a6116441-2985-4723-9889-6c3422159243/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.998371 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1a529b0-65f7-4680-a4fd-4dacebc1ab83/nova-api-api/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.001896 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_396b18f9-9859-4b42-aca1-c29c3724c86c/nova-metadata-log/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.348791 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/mysql-bootstrap/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.538637 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/mysql-bootstrap/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.551852 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0/nova-scheduler-scheduler/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.616476 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/galera/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.775193 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/mysql-bootstrap/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.973187 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/mysql-bootstrap/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.007963 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/galera/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.227751 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_75258dbe-c223-4e55-92a6-8e588745294a/openstackclient/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.316809 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cr94m_e2c9ac17-43ef-4ccb-83b1-e20ee03289de/ovn-controller/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.464070 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mz9j9_c711491e-0b8b-4737-88c9-bc5e37051ac1/openstack-network-exporter/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.721715 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server-init/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.956988 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.966071 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server-init/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.157082 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_396b18f9-9859-4b42-aca1-c29c3724c86c/nova-metadata-metadata/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.220054 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gxxmx_7335769e-5b13-4d1b-8aa7-e7f192ee9e2b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.382390 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovs-vswitchd/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.427833 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e3e8f67d-0748-4bff-b7c5-8432c7e4ab64/openstack-network-exporter/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.467240 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e3e8f67d-0748-4bff-b7c5-8432c7e4ab64/ovn-northd/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.646455 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a9e6735-4159-4248-a8f5-6714d386901a/openstack-network-exporter/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.661471 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a9e6735-4159-4248-a8f5-6714d386901a/ovsdbserver-nb/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.863802 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_00790bd0-5fbb-4927-8361-085c9691c171/openstack-network-exporter/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.890711 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_00790bd0-5fbb-4927-8361-085c9691c171/ovsdbserver-sb/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.185425 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/init-config-reloader/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.249116 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f94997dd8-cvnfv_b0ce1e0a-4e51-408c-b3f8-500cf6476b96/placement-api/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.342840 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f94997dd8-cvnfv_b0ce1e0a-4e51-408c-b3f8-500cf6476b96/placement-log/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.365480 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/init-config-reloader/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.435979 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/config-reloader/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.493120 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/prometheus/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.584231 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/thanos-sidecar/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.694564 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/setup-container/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.081954 5012 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/setup-container/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.144583 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/rabbitmq/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.218197 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/setup-container/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.442482 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/setup-container/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.459633 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/rabbitmq/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.483709 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs_464de984-0dd6-4c4d-aed3-afbf84e0cdcf/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.673760 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-skvzd_07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.739026 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pl267_61bd41ab-cfea-4df2-9be0-8321c6c11ebd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.920473 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7xnxl_86b984ed-bd52-4348-9415-dccff4a0e1a4/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.007272 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9rlns_f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc/ssh-known-hosts-edpm-deployment/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.235777 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59bfbf7475-v98h9_4c9aa274-240d-4d50-b38a-754dd493f351/proxy-server/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.382158 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59bfbf7475-v98h9_4c9aa274-240d-4d50-b38a-754dd493f351/proxy-httpd/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.425762 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5vxhd_d05da3bc-6c22-4956-9fab-331eed79d175/swift-ring-rebalance/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.581956 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-auditor/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.631961 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-reaper/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.775383 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-replicator/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.791154 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-server/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 
07:04:06.849791 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-auditor/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.908998 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-replicator/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.977559 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-server/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.016774 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-updater/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.101100 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-auditor/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.171470 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-expirer/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.211929 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-replicator/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.275572 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-server/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.358397 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-updater/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.401883 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/rsync/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.483660 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/swift-recon-cron/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.708955 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx_73fe066f-3ee6-4ffc-aeb4-874c14fb0b84/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.768622 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_54eccb09-b3ec-45bc-8065-4c5eb9516257/tempest-tests-tempest-tests-runner/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.893348 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_78c125a8-bf69-4524-9b70-be9fe9f313e7/test-operator-logs-container/0.log" Feb 19 07:04:08 crc kubenswrapper[5012]: I0219 07:04:08.016602 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6_cdccd552-e703-4d8d-86b4-ff481671527f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:08 crc kubenswrapper[5012]: I0219 07:04:08.922139 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_d4778529-f7d0-482b-bd67-003aaa7ca0ae/watcher-applier/0.log" Feb 19 07:04:09 crc kubenswrapper[5012]: I0219 07:04:09.404664 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7d74d5de-7e1d-47cc-8aaa-cb303332a03a/watcher-api-log/0.log" Feb 19 07:04:12 crc kubenswrapper[5012]: I0219 07:04:12.041852 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_f87036fc-fa94-4038-8b65-bb85d8ff6f10/watcher-decision-engine/0.log" Feb 19 07:04:13 crc kubenswrapper[5012]: I0219 07:04:13.307859 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7d74d5de-7e1d-47cc-8aaa-cb303332a03a/watcher-api/0.log" Feb 19 07:04:23 crc kubenswrapper[5012]: I0219 07:04:23.716284 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_38a4a51f-c380-48fc-8f0e-cdd1ea09fa53/memcached/0.log" Feb 19 07:04:41 crc kubenswrapper[5012]: I0219 07:04:41.904782 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.069081 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.113692 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.126458 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.309991 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/extract/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.332613 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.359634 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.728334 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-kt4nw_11d49fcd-6e31-47e5-84a1-c6ae972e13cb/manager/0.log" Feb 19 07:04:43 crc kubenswrapper[5012]: I0219 07:04:43.059209 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-qzq7x_8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be/manager/0.log" Feb 19 07:04:43 crc kubenswrapper[5012]: I0219 07:04:43.216924 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-5szxp_bfca307c-9b00-4c12-bdd6-a394b7cc7cfd/manager/0.log" Feb 19 07:04:43 crc kubenswrapper[5012]: I0219 07:04:43.476427 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-csct6_4f281b5b-b656-4d4a-b628-d4bfe4fc94f9/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.002967 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-dgldv_8629b5e4-e6a8-4c47-b76b-f58a26b42912/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.208994 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cp8kx_996bfd61-486b-432d-9e09-d3a90ff9124c/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.543827 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-9zkvx_dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.683345 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-ldrx5_e9e07b56-2724-4046-8a60-81b751fb0588/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.857121 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-556xv_8af03a54-ad7a-4684-b5a6-ba83f410e6ed/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.888983 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rpbt8_1e872b11-03d6-4d3f-8e06-e10e1e73d917/manager/0.log" Feb 19 07:04:45 crc kubenswrapper[5012]: I0219 07:04:45.139655 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-27hfc_b123191d-e55b-4ddc-90ea-abcb34c97be2/manager/0.log" Feb 19 07:04:45 crc kubenswrapper[5012]: I0219 07:04:45.325960 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-l65c5_457202a7-ae9f-4d06-8690-d220e532b305/manager/0.log" Feb 19 07:04:45 crc kubenswrapper[5012]: I0219 07:04:45.826115 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4_d6eb3922-90e6-4bb1-8caa-aac6b69c76b0/manager/0.log" Feb 19 07:04:46 crc kubenswrapper[5012]: I0219 07:04:46.111141 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-q57bk_76b34ac4-96f1-4bbc-9969-eb3e1cfc2159/operator/0.log" Feb 19 07:04:46 crc kubenswrapper[5012]: I0219 
07:04:46.495262 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cl447_797c14cf-1b4d-4b4e-9dc5-4843e2e77cef/registry-server/0.log" Feb 19 07:04:47 crc kubenswrapper[5012]: I0219 07:04:47.142121 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-25qtj_10e6fa53-581b-4965-8a38-c70a5c61c6d7/manager/0.log" Feb 19 07:04:47 crc kubenswrapper[5012]: I0219 07:04:47.237026 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-nlqtw_08a4f79c-e42e-4609-b104-01b9a05ac95a/manager/0.log" Feb 19 07:04:47 crc kubenswrapper[5012]: I0219 07:04:47.442564 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mqc2w_4a3cde05-282a-4c65-9570-74d04c71a034/operator/0.log" Feb 19 07:04:47 crc kubenswrapper[5012]: I0219 07:04:47.669862 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6hfg4_c55ed223-371b-409a-bcb6-8ca6d2a3c908/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 07:04:48.050830 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-tj54n_d1f124a8-4132-458d-a5a5-1839d31e7772/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 07:04:48.203413 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-qjpw6_49d66f3b-e451-4b73-bc6a-4b854a71a4d6/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 07:04:48.250188 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-pcpk8_73e25e30-860d-4faf-b1f3-bc284f7189d1/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 
07:04:48.456929 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-pqrs7_ef60eda4-7ead-499b-b70f-07a34574096f/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 07:04:48.488273 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-z5r47_739941d0-4bff-4dae-8f01-636386a37dd0/manager/0.log" Feb 19 07:04:53 crc kubenswrapper[5012]: I0219 07:04:53.319990 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-xzk2n_0cc1b41b-fbf6-4d0c-b721-dcad09c03feb/manager/0.log" Feb 19 07:05:10 crc kubenswrapper[5012]: I0219 07:05:10.997222 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mbxqf_9102ddf1-e140-48e7-9ecd-14a4c007f5d5/control-plane-machine-set-operator/0.log" Feb 19 07:05:11 crc kubenswrapper[5012]: I0219 07:05:11.226403 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6qvzq_5c537eae-5a27-4a4d-ba9e-0fd7efe72f37/kube-rbac-proxy/0.log" Feb 19 07:05:11 crc kubenswrapper[5012]: I0219 07:05:11.233806 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6qvzq_5c537eae-5a27-4a4d-ba9e-0fd7efe72f37/machine-api-operator/0.log" Feb 19 07:05:25 crc kubenswrapper[5012]: I0219 07:05:25.355286 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-sq68l_3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02/cert-manager-controller/0.log" Feb 19 07:05:25 crc kubenswrapper[5012]: I0219 07:05:25.532669 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-w66zf_4b5870bd-8fb3-4eef-a893-f31ce8bb1506/cert-manager-cainjector/0.log" Feb 19 07:05:25 crc 
kubenswrapper[5012]: I0219 07:05:25.567192 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-drndq_53138562-0907-4b72-b228-21ef0c561f57/cert-manager-webhook/0.log" Feb 19 07:05:39 crc kubenswrapper[5012]: I0219 07:05:39.830737 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-zvl62_0aad4d6c-fc60-4843-b21b-d4ad6d552d5f/nmstate-console-plugin/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.064178 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tdz8p_4b5e9e17-84bc-4d05-87f9-328826ea39df/nmstate-handler/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.184757 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-hn274_91d45b3f-23b3-4342-8168-667f665ffe82/nmstate-metrics/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.196481 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-hn274_91d45b3f-23b3-4342-8168-667f665ffe82/kube-rbac-proxy/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.340719 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-2smgj_d6ac1260-4ff8-4025-af6e-35711452ef6f/nmstate-operator/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.402399 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-mqtfh_50749fb3-e43e-4874-a0ea-8dabae225f85/nmstate-webhook/0.log" Feb 19 07:05:55 crc kubenswrapper[5012]: I0219 07:05:55.751977 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9t66t_9f3d925a-f08d-4e92-baf3-805f27c9ae35/prometheus-operator/0.log" Feb 19 07:05:55 crc kubenswrapper[5012]: I0219 07:05:55.908570 5012 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-cddcp_9364b7f3-e3e3-4432-a4e7-4b80c9a50225/prometheus-operator-admission-webhook/0.log" Feb 19 07:05:55 crc kubenswrapper[5012]: I0219 07:05:55.995411 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-rlcjg_3c60bb85-2242-4d9f-95f9-27b2e747727d/prometheus-operator-admission-webhook/0.log" Feb 19 07:05:56 crc kubenswrapper[5012]: I0219 07:05:56.117622 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-vw7xl_63ee166b-5027-4928-9196-9488685f87d5/operator/0.log" Feb 19 07:05:56 crc kubenswrapper[5012]: I0219 07:05:56.153324 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5grbr_86bcbf15-9553-41af-974c-3418e588e575/perses-operator/0.log" Feb 19 07:06:10 crc kubenswrapper[5012]: I0219 07:06:10.894224 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-c4jbq_fe949ecf-1cb7-47c7-b196-d4851f142c5f/kube-rbac-proxy/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.025364 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-c4jbq_fe949ecf-1cb7-47c7-b196-d4851f142c5f/controller/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.131135 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.279662 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.283533 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.308057 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.335754 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.507189 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.524097 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.542754 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.549868 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.667196 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.672002 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.720527 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.730977 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/controller/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.861132 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/frr-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.951694 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/kube-rbac-proxy/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.958543 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/kube-rbac-proxy-frr/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.040566 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/reloader/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.190432 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-hdb84_431a9bf4-479e-4255-9664-554c80fa4376/frr-k8s-webhook-server/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.378994 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-558c5c4774-9r4gj_05b78fff-bf4d-4cd6-aba9-b74303a5dd50/manager/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.535606 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-699bc447bd-zqv74_ec7fdada-6f6e-4d8b-b2e1-c944050c714c/webhook-server/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.719258 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-87ct4_82cb6684-3937-45f8-9f18-56940e88f480/kube-rbac-proxy/0.log" Feb 19 07:06:13 crc kubenswrapper[5012]: I0219 07:06:13.267147 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-87ct4_82cb6684-3937-45f8-9f18-56940e88f480/speaker/0.log" Feb 19 07:06:13 crc kubenswrapper[5012]: I0219 07:06:13.506630 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/frr/0.log" Feb 19 07:06:14 crc kubenswrapper[5012]: I0219 07:06:14.430252 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:06:14 crc kubenswrapper[5012]: I0219 07:06:14.430321 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.506128 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.735134 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.747777 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.776152 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.958520 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.973914 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.991662 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/extract/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.102494 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.320257 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.341122 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 
07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.375800 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.563988 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/extract/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.565414 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.593932 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.758355 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.944444 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.952717 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.966060 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 
07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.143367 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.146551 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.363512 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.664836 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.701760 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.731249 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.803782 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/registry-server/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.845461 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.872392 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.126525 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.185633 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/registry-server/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.298987 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.345019 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.362361 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.486761 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.517577 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/extract/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 
07:06:31.520263 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.657864 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jqjls_800f8349-6ef3-44ae-90a0-56c89ca82479/marketplace-operator/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.705545 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.898536 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.898857 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.903759 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.073844 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.100653 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.291072 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/registry-server/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.303934 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.457500 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.495719 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.515755 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.844427 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.863775 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 07:06:33 crc kubenswrapper[5012]: I0219 07:06:33.555820 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/registry-server/0.log" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.430959 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.431519 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661232 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:44 crc kubenswrapper[5012]: E0219 07:06:44.661713 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216e052a-9145-4dba-a625-f9262c5f27cb" containerName="container-00" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661729 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="216e052a-9145-4dba-a625-f9262c5f27cb" containerName="container-00" Feb 19 07:06:44 crc kubenswrapper[5012]: E0219 07:06:44.661745 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="extract-utilities" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661752 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="extract-utilities" Feb 19 07:06:44 crc kubenswrapper[5012]: E0219 07:06:44.661770 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661776 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" Feb 19 07:06:44 crc kubenswrapper[5012]: E0219 07:06:44.661798 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="extract-content" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661804 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="extract-content" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661980 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661990 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="216e052a-9145-4dba-a625-f9262c5f27cb" containerName="container-00" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.663241 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.691711 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.742574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.742672 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.743007 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.845857 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.846036 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.846066 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.846556 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.846580 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.873799 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.988585 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.475511 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.791800 5012 generic.go:334] "Generic (PLEG): container finished" podID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerID="347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577" exitCode=0 Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.791991 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerDied","Data":"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577"} Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.792092 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerStarted","Data":"6067c6764c5a57d594d7f6654dd024f673786b231a6517e6b1c39cbb30d6b688"} Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.793964 5012 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 07:06:46 crc kubenswrapper[5012]: I0219 07:06:46.918916 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-rlcjg_3c60bb85-2242-4d9f-95f9-27b2e747727d/prometheus-operator-admission-webhook/0.log" Feb 19 07:06:46 crc kubenswrapper[5012]: I0219 07:06:46.938004 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-cddcp_9364b7f3-e3e3-4432-a4e7-4b80c9a50225/prometheus-operator-admission-webhook/0.log" Feb 19 07:06:46 crc kubenswrapper[5012]: I0219 07:06:46.979400 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9t66t_9f3d925a-f08d-4e92-baf3-805f27c9ae35/prometheus-operator/0.log" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.051391 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.053277 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.064823 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.093924 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.093981 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.094408 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.150020 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5grbr_86bcbf15-9553-41af-974c-3418e588e575/perses-operator/0.log" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.155463 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-vw7xl_63ee166b-5027-4928-9196-9488685f87d5/operator/0.log" Feb 19 07:06:47 crc 
kubenswrapper[5012]: I0219 07:06:47.195848 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.195952 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.195983 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.196478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.196504 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.214799 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.577167 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.821812 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerStarted","Data":"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1"} Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.128244 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:06:48 crc kubenswrapper[5012]: W0219 07:06:48.158936 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb783a8c_66a5_431b_bdb4_672b0e8991f1.slice/crio-664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97 WatchSource:0}: Error finding container 664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97: Status 404 returned error can't find the container with id 664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97 Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.835935 5012 generic.go:334] "Generic (PLEG): container finished" podID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerID="d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb" exitCode=0 Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.836005 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" 
event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerDied","Data":"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb"} Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.836326 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerStarted","Data":"664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97"} Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.840491 5012 generic.go:334] "Generic (PLEG): container finished" podID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerID="2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1" exitCode=0 Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.840519 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerDied","Data":"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1"} Feb 19 07:06:49 crc kubenswrapper[5012]: I0219 07:06:49.859613 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerStarted","Data":"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db"} Feb 19 07:06:49 crc kubenswrapper[5012]: I0219 07:06:49.866245 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerStarted","Data":"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd"} Feb 19 07:06:49 crc kubenswrapper[5012]: I0219 07:06:49.935604 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bpf8b" podStartSLOduration=2.452900948 podStartE2EDuration="5.93558546s" podCreationTimestamp="2026-02-19 07:06:44 +0000 
UTC" firstStartedPulling="2026-02-19 07:06:45.79376172 +0000 UTC m=+6101.827084289" lastFinishedPulling="2026-02-19 07:06:49.276446232 +0000 UTC m=+6105.309768801" observedRunningTime="2026-02-19 07:06:49.892963323 +0000 UTC m=+6105.926285892" watchObservedRunningTime="2026-02-19 07:06:49.93558546 +0000 UTC m=+6105.968908029" Feb 19 07:06:50 crc kubenswrapper[5012]: I0219 07:06:50.877658 5012 generic.go:334] "Generic (PLEG): container finished" podID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerID="64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd" exitCode=0 Feb 19 07:06:50 crc kubenswrapper[5012]: I0219 07:06:50.877740 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerDied","Data":"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd"} Feb 19 07:06:51 crc kubenswrapper[5012]: E0219 07:06:51.076917 5012 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.110:58546->38.102.83.110:36123: read tcp 38.102.83.110:58546->38.102.83.110:36123: read: connection reset by peer Feb 19 07:06:51 crc kubenswrapper[5012]: I0219 07:06:51.891291 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerStarted","Data":"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55"} Feb 19 07:06:51 crc kubenswrapper[5012]: I0219 07:06:51.907147 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-745ds" podStartSLOduration=2.45812469 podStartE2EDuration="4.907132478s" podCreationTimestamp="2026-02-19 07:06:47 +0000 UTC" firstStartedPulling="2026-02-19 07:06:48.838155865 +0000 UTC m=+6104.871478434" lastFinishedPulling="2026-02-19 07:06:51.287163653 +0000 UTC m=+6107.320486222" observedRunningTime="2026-02-19 
07:06:51.904553615 +0000 UTC m=+6107.937876184" watchObservedRunningTime="2026-02-19 07:06:51.907132478 +0000 UTC m=+6107.940455047" Feb 19 07:06:53 crc kubenswrapper[5012]: E0219 07:06:53.682231 5012 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.110:43550->38.102.83.110:36123: write tcp 38.102.83.110:43550->38.102.83.110:36123: write: broken pipe Feb 19 07:06:54 crc kubenswrapper[5012]: I0219 07:06:54.988816 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:54 crc kubenswrapper[5012]: I0219 07:06:54.989140 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:55 crc kubenswrapper[5012]: I0219 07:06:55.065440 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:55 crc kubenswrapper[5012]: I0219 07:06:55.976222 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:56 crc kubenswrapper[5012]: I0219 07:06:56.248866 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:57 crc kubenswrapper[5012]: I0219 07:06:57.578077 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:57 crc kubenswrapper[5012]: I0219 07:06:57.579212 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:57 crc kubenswrapper[5012]: I0219 07:06:57.947775 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bpf8b" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="registry-server" 
containerID="cri-o://89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" gracePeriod=2 Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.474298 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.592505 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") pod \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.592596 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") pod \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.592623 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") pod \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.593435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities" (OuterVolumeSpecName: "utilities") pod "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" (UID: "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.618937 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" (UID: "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.619567 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584" (OuterVolumeSpecName: "kube-api-access-nl584") pod "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" (UID: "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323"). InnerVolumeSpecName "kube-api-access-nl584". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.633621 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-745ds" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="registry-server" probeResult="failure" output=< Feb 19 07:06:58 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 07:06:58 crc kubenswrapper[5012]: > Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.694910 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") on node \"crc\" DevicePath \"\"" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.694938 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.694949 
5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957317 5012 generic.go:334] "Generic (PLEG): container finished" podID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerID="89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" exitCode=0 Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957411 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerDied","Data":"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db"} Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957484 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerDied","Data":"6067c6764c5a57d594d7f6654dd024f673786b231a6517e6b1c39cbb30d6b688"} Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957502 5012 scope.go:117] "RemoveContainer" containerID="89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957380 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.979367 5012 scope.go:117] "RemoveContainer" containerID="2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.996893 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.010103 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.017051 5012 scope.go:117] "RemoveContainer" containerID="347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.074716 5012 scope.go:117] "RemoveContainer" containerID="89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" Feb 19 07:06:59 crc kubenswrapper[5012]: E0219 07:06:59.075424 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db\": container with ID starting with 89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db not found: ID does not exist" containerID="89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075474 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db"} err="failed to get container status \"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db\": rpc error: code = NotFound desc = could not find container \"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db\": container with ID starting with 89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db not found: 
ID does not exist" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075506 5012 scope.go:117] "RemoveContainer" containerID="2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1" Feb 19 07:06:59 crc kubenswrapper[5012]: E0219 07:06:59.075734 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1\": container with ID starting with 2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1 not found: ID does not exist" containerID="2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075759 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1"} err="failed to get container status \"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1\": rpc error: code = NotFound desc = could not find container \"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1\": container with ID starting with 2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1 not found: ID does not exist" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075777 5012 scope.go:117] "RemoveContainer" containerID="347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577" Feb 19 07:06:59 crc kubenswrapper[5012]: E0219 07:06:59.075963 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577\": container with ID starting with 347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577 not found: ID does not exist" containerID="347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075989 5012 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577"} err="failed to get container status \"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577\": rpc error: code = NotFound desc = could not find container \"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577\": container with ID starting with 347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577 not found: ID does not exist" Feb 19 07:07:00 crc kubenswrapper[5012]: I0219 07:07:00.711732 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" path="/var/lib/kubelet/pods/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323/volumes" Feb 19 07:07:07 crc kubenswrapper[5012]: I0219 07:07:07.659814 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:07:07 crc kubenswrapper[5012]: I0219 07:07:07.731758 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:07:07 crc kubenswrapper[5012]: I0219 07:07:07.903504 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.065865 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-745ds" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="registry-server" containerID="cri-o://64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" gracePeriod=2 Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.671209 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.728109 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") pod \"db783a8c-66a5-431b-bdb4-672b0e8991f1\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.728618 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") pod \"db783a8c-66a5-431b-bdb4-672b0e8991f1\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.728726 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") pod \"db783a8c-66a5-431b-bdb4-672b0e8991f1\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.729653 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities" (OuterVolumeSpecName: "utilities") pod "db783a8c-66a5-431b-bdb4-672b0e8991f1" (UID: "db783a8c-66a5-431b-bdb4-672b0e8991f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.736367 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j" (OuterVolumeSpecName: "kube-api-access-2f88j") pod "db783a8c-66a5-431b-bdb4-672b0e8991f1" (UID: "db783a8c-66a5-431b-bdb4-672b0e8991f1"). InnerVolumeSpecName "kube-api-access-2f88j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.800247 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db783a8c-66a5-431b-bdb4-672b0e8991f1" (UID: "db783a8c-66a5-431b-bdb4-672b0e8991f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.831906 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.831947 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.831964 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") on node \"crc\" DevicePath \"\"" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.083952 5012 generic.go:334] "Generic (PLEG): container finished" podID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerID="64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" exitCode=0 Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.083998 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerDied","Data":"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55"} Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.084033 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerDied","Data":"664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97"} Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.084054 5012 scope.go:117] "RemoveContainer" containerID="64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.084060 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.113690 5012 scope.go:117] "RemoveContainer" containerID="64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.136525 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.159221 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.170030 5012 scope.go:117] "RemoveContainer" containerID="d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.220498 5012 scope.go:117] "RemoveContainer" containerID="64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" Feb 19 07:07:10 crc kubenswrapper[5012]: E0219 07:07:10.221036 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55\": container with ID starting with 64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55 not found: ID does not exist" containerID="64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 
07:07:10.221086 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55"} err="failed to get container status \"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55\": rpc error: code = NotFound desc = could not find container \"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55\": container with ID starting with 64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55 not found: ID does not exist" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.221119 5012 scope.go:117] "RemoveContainer" containerID="64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd" Feb 19 07:07:10 crc kubenswrapper[5012]: E0219 07:07:10.221542 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd\": container with ID starting with 64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd not found: ID does not exist" containerID="64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.221569 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd"} err="failed to get container status \"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd\": rpc error: code = NotFound desc = could not find container \"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd\": container with ID starting with 64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd not found: ID does not exist" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.221586 5012 scope.go:117] "RemoveContainer" containerID="d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb" Feb 19 07:07:10 crc 
kubenswrapper[5012]: E0219 07:07:10.221834 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb\": container with ID starting with d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb not found: ID does not exist" containerID="d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.221855 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb"} err="failed to get container status \"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb\": rpc error: code = NotFound desc = could not find container \"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb\": container with ID starting with d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb not found: ID does not exist" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.720860 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" path="/var/lib/kubelet/pods/db783a8c-66a5-431b-bdb4-672b0e8991f1/volumes" Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.431335 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.431945 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.432012 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.433046 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.433142 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b" gracePeriod=600 Feb 19 07:07:15 crc kubenswrapper[5012]: I0219 07:07:15.153253 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b" exitCode=0 Feb 19 07:07:15 crc kubenswrapper[5012]: I0219 07:07:15.153480 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b"} Feb 19 07:07:15 crc kubenswrapper[5012]: I0219 07:07:15.153722 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"} Feb 19 07:07:15 crc kubenswrapper[5012]: I0219 07:07:15.153750 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:08:42 crc kubenswrapper[5012]: I0219 07:08:42.297674 5012 generic.go:334] "Generic (PLEG): container finished" podID="91bc1236-3737-44f8-a82a-35044bd3258b" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" exitCode=0 Feb 19 07:08:42 crc kubenswrapper[5012]: I0219 07:08:42.297779 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/must-gather-znn9c" event={"ID":"91bc1236-3737-44f8-a82a-35044bd3258b","Type":"ContainerDied","Data":"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251"} Feb 19 07:08:42 crc kubenswrapper[5012]: I0219 07:08:42.299693 5012 scope.go:117] "RemoveContainer" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" Feb 19 07:08:42 crc kubenswrapper[5012]: I0219 07:08:42.827434 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8nbsb_must-gather-znn9c_91bc1236-3737-44f8-a82a-35044bd3258b/gather/0.log" Feb 19 07:08:43 crc kubenswrapper[5012]: I0219 07:08:43.728700 5012 scope.go:117] "RemoveContainer" containerID="8925602fab983c962e968ddbebc86a948cd3945bd659b4613398d3aca81b02b2" Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.418276 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.419261 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8nbsb/must-gather-znn9c" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" containerName="copy" 
containerID="cri-o://e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" gracePeriod=2 Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.431082 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.877713 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8nbsb_must-gather-znn9c_91bc1236-3737-44f8-a82a-35044bd3258b/copy/0.log" Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.878597 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.966170 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") pod \"91bc1236-3737-44f8-a82a-35044bd3258b\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.966563 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") pod \"91bc1236-3737-44f8-a82a-35044bd3258b\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.972099 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp" (OuterVolumeSpecName: "kube-api-access-fh8mp") pod "91bc1236-3737-44f8-a82a-35044bd3258b" (UID: "91bc1236-3737-44f8-a82a-35044bd3258b"). InnerVolumeSpecName "kube-api-access-fh8mp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.068717 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") on node \"crc\" DevicePath \"\"" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.180483 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "91bc1236-3737-44f8-a82a-35044bd3258b" (UID: "91bc1236-3737-44f8-a82a-35044bd3258b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.272530 5012 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.445707 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8nbsb_must-gather-znn9c_91bc1236-3737-44f8-a82a-35044bd3258b/copy/0.log" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.446154 5012 generic.go:334] "Generic (PLEG): container finished" podID="91bc1236-3737-44f8-a82a-35044bd3258b" containerID="e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" exitCode=143 Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.446223 5012 scope.go:117] "RemoveContainer" containerID="e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.446243 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.469514 5012 scope.go:117] "RemoveContainer" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.592714 5012 scope.go:117] "RemoveContainer" containerID="e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" Feb 19 07:08:55 crc kubenswrapper[5012]: E0219 07:08:55.593210 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d\": container with ID starting with e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d not found: ID does not exist" containerID="e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.593259 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d"} err="failed to get container status \"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d\": rpc error: code = NotFound desc = could not find container \"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d\": container with ID starting with e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d not found: ID does not exist" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.593284 5012 scope.go:117] "RemoveContainer" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" Feb 19 07:08:55 crc kubenswrapper[5012]: E0219 07:08:55.593839 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251\": container with ID starting with 
746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251 not found: ID does not exist" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251"
Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.593897 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251"} err="failed to get container status \"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251\": rpc error: code = NotFound desc = could not find container \"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251\": container with ID starting with 746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251 not found: ID does not exist"
Feb 19 07:08:56 crc kubenswrapper[5012]: I0219 07:08:56.722860 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" path="/var/lib/kubelet/pods/91bc1236-3737-44f8-a82a-35044bd3258b/volumes"
Feb 19 07:09:14 crc kubenswrapper[5012]: I0219 07:09:14.431260 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 07:09:14 crc kubenswrapper[5012]: I0219 07:09:14.431961 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 07:09:43 crc kubenswrapper[5012]: I0219 07:09:43.797550 5012 scope.go:117] "RemoveContainer" containerID="783537a7e84f3b0ed638f3eb6a2789d1dd27811c0584c5d95f222e682776f22b"
Feb 19 07:09:44 crc kubenswrapper[5012]: I0219 07:09:44.431732 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 07:09:44 crc kubenswrapper[5012]: I0219 07:09:44.431827 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.431033 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.431592 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.431641 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44"
Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.432447 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.432522 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" gracePeriod=600
Feb 19 07:10:14 crc kubenswrapper[5012]: E0219 07:10:14.562420 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:10:15 crc kubenswrapper[5012]: I0219 07:10:15.366936 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" exitCode=0
Feb 19 07:10:15 crc kubenswrapper[5012]: I0219 07:10:15.367012 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"}
Feb 19 07:10:15 crc kubenswrapper[5012]: I0219 07:10:15.367088 5012 scope.go:117] "RemoveContainer" containerID="af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b"
Feb 19 07:10:15 crc kubenswrapper[5012]: I0219 07:10:15.368108 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:10:15 crc kubenswrapper[5012]: E0219 07:10:15.368923 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:10:29 crc kubenswrapper[5012]: I0219 07:10:29.702102 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:10:29 crc kubenswrapper[5012]: E0219 07:10:29.702788 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:10:44 crc kubenswrapper[5012]: I0219 07:10:44.709051 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:10:44 crc kubenswrapper[5012]: E0219 07:10:44.710127 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:10:57 crc kubenswrapper[5012]: I0219 07:10:57.704387 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:10:57 crc kubenswrapper[5012]: E0219 07:10:57.705856 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:11:10 crc kubenswrapper[5012]: I0219 07:11:10.703432 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:11:10 crc kubenswrapper[5012]: E0219 07:11:10.704799 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:11:24 crc kubenswrapper[5012]: I0219 07:11:24.711978 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:11:24 crc kubenswrapper[5012]: E0219 07:11:24.713183 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:11:35 crc kubenswrapper[5012]: I0219 07:11:35.703502 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:11:35 crc kubenswrapper[5012]: E0219 07:11:35.704419 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:11:48 crc kubenswrapper[5012]: I0219 07:11:48.703142 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:11:48 crc kubenswrapper[5012]: E0219 07:11:48.704271 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:12:01 crc kubenswrapper[5012]: I0219 07:12:01.703072 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:12:01 crc kubenswrapper[5012]: E0219 07:12:01.703847 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:12:16 crc kubenswrapper[5012]: I0219 07:12:16.703900 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:12:16 crc kubenswrapper[5012]: E0219 07:12:16.705056 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:12:30 crc kubenswrapper[5012]: I0219 07:12:30.702821 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:12:30 crc kubenswrapper[5012]: E0219 07:12:30.703825 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:12:36 crc kubenswrapper[5012]: I0219 07:12:36.562775 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-59bfbf7475-v98h9" podUID="4c9aa274-240d-4d50-b38a-754dd493f351" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"